Re: Fwd: Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-14 Thread Adam Heath
Adam Heath wrote:
 
 
 
 Subject:
 Re: Woop! Confluence data imported into git and displayed with webslinger!
 From:
 Adam Heath doo...@brainfood.com
 Date:
 Wed, 13 Oct 2010 17:08:07 -0500
 To:
 Raj Saini rajsa...@gmail.com
 
 
 On 10/13/2010 12:03 AM, Raj Saini wrote:

 Then, with the backend code and template files stored in the
 filesystem, the actual content itself is also stored in the
 filesystem. Why have a different storage module for the content than
 you do for the application?

 I don't think it is a good idea to store your code and data together.
 Data is something you need to back up regularly, while your code is
 generally in binary form and easily reproducible, such as by deploying a
 war or jar file.
 
 That's what overlay filesystems are for.
 
 /job/brainfood-standard-base and /job/brainfood-shop-base have all our
 code.  /job/$clientname-$job-name then has the content for the site. We
 have an overlay filesystem implementation written for commons-vfs that
 merges all that together into a single view.

Here is an example of what I mean by overlay:

==
doo...@host:/job/brainfood-standard-base/pages/BASE/www/account$ tree
.
├── Actions.whtml
├── left.vm
├── left.vm@
│   └── no-edit
├── Menu.whtml
├── newsletters
│   ├── Header.whtml
│   ├── index.groovy
│   ├── index.groovy@
│   │   ├── is-filter
│   │   └── require-login
│   ├── List.vm
│   ├── SuccessMessage.whtml
│   ├── Title.whtml
│   ├── View.vm
│   └── View.vm@
│       └── no-edit
├── password
│   ├── index.groovy
│   ├── index.groovy@
│   │   ├── is-filter
│   │   └── require-login
│   ├── SuccessMessage.whtml
│   ├── Title.whtml
│   ├── View.vm
│   └── View.vm@
│       └── no-edit
├── profile
│   ├── contact-action-EMAIL_ADDRESS.vm
│   ├── contact-action-POSTAL_ADDRESS.vm
│   ├── contact-action-TELECOM_NUMBER.vm
│   ├── contact-edit-EMAIL_ADDRESS.vm
│   ├── contact-view-EMAIL_ADDRESS.vm
│   ├── contact-view-POSTAL_ADDRESS.vm
│   ├── contact-view-TELECOM_NUMBER.vm
│   ├── index.groovy
│   ├── index.groovy@
│   │   ├── is-filter
│   │   └── require-login
│   ├── Title.whtml
│   ├── View.vm
│   └── View.vm@
│       ├── no-edit
│       └── PageStyle
└── Title.whtml
doo...@host:/job/brainfood-shop-base/pages/SHOP/www/account$ tree
.
├── contactmech
│   ├── index.groovy
│   ├── index.groovy@
│   │   └── is-filter
│   ├── PostalAddress.vm
│   └── TelecomNumber.vm
├── destinations
│   ├── index.groovy
│   ├── index.groovy@
│   │   ├── is-filter
│   │   └── require-login
│   ├── NotYetCreated.whtml
│   ├── Orders.vm
│   ├── RelationDisplay.vm
│   ├── Rename.vm
│   ├── Rename.vm@
│   │   └── no-edit
│   ├── Title.whtml
│   ├── View.vm
│   └── View.vm@
│       └── no-edit
├── order
│   ├── Actions.whtml
│   ├── index.groovy
│   ├── index.groovy@
│   │   ├── is-filter
│   │   └── require-login
│   ├── Print.vm
│   ├── Print.vm@
│   │   └── no-edit
│   ├── View.vm
│   └── View.vm@
│       └── no-edit
├── orders
│   ├── index.groovy
│   ├── index.groovy@
│   │   ├── is-filter
│   │   └── require-login
│   ├── NoOrders.whtml
│   ├── View.vm
│   └── View.vm@
│       └── no-edit
└── paymentmethods
    ├── index.groovy
    ├── index.groovy@
    │   ├── is-filter
    │   └── require-login
    ├── Title.whtml
    ├── View.vm
    └── View.vm@
        └── no-edit
doo...@host:/job/content-site/pages/CONTENT/www/account$ tree
.
├── destinations
│   └── View.vm@
│       ├── PageStyle
│       └── section-Left
├── Menu.whtml
├── newsletters
│   └── View.vm@
│       ├── PageStyle
│       └── section-Left
├── order
│   └── View.vm@
│       ├── PageStyle
│       └── section-Left
├── orders
│   └── View.vm@
│       ├── PageStyle
│       └── section-Left
├── password
│   └── View.vm@
│       ├── PageStyle
│       └── section-Left
├── paymentmethods
│   └── View.vm@
│       ├── PageStyle
│       └── section-Left
└── profile
    └── View.vm@
        ├── PageStyle
        └── section-Left
==
You'll note that standard-base and shop-base have different account
folders.  standard-base has completely generic screens, while shop-base
has screens that only make sense for online shopping sites.

Then, the content folder has the same directories that exist in both
of the bases, but is very lightweight, and only sets certain
configuration parameters.

All three of these directory structures are then merged at runtime to
provide a unified view.  A higher-level tree can add new files, or
replace files.  It's also possible to delete a file from a lower level;
the standard term for this is a whiteout.
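To make the merge rule concrete, here is a minimal sketch of first-hit-wins lookup across layers, with whiteout support. This is an illustration only: the `OverlayLookup` class and the `.wh.<name>` whiteout marker are hypothetical names, and each layer is modeled as an in-memory map rather than the actual commons-vfs filesystem.

```java
import java.util.List;
import java.util.Map;

// Sketch only: illustrates the overlay merge rule, with each layer modeled
// as a map from relative path to file content.  Class and marker names are
// hypothetical; the real implementation is a commons-vfs filesystem.
public class OverlayLookup {
    private final List<Map<String, String>> layers; // highest-priority layer first

    public OverlayLookup(List<Map<String, String>> layers) {
        this.layers = layers;
    }

    /**
     * Resolve a path against the layer stack: the first layer containing
     * the file wins, and a ".wh.<name>" whiteout entry in a higher layer
     * hides the file in all lower layers.
     */
    public String resolve(String name) {
        for (Map<String, String> layer : layers) {
            if (layer.containsKey(".wh." + name)) {
                return null;            // whiteout: file is deleted in this layer
            }
            if (layer.containsKey(name)) {
                return layer.get(name); // first hit wins
            }
        }
        return null;                    // not present in any layer
    }

    public static void main(String[] args) {
        OverlayLookup overlay = new OverlayLookup(List.of(
                Map.of(".wh.left.vm", ""),                      // top layer deletes left.vm
                Map.of("View.vm", "base", "left.vm", "base"))); // bottom layer
        System.out.println(overlay.resolve("View.vm")); // found in bottom layer
        System.out.println(overlay.resolve("left.vm")); // null: whited out
    }
}
```

In terms of the trees above, the content-site tree would be the top layer, shop-base beneath it, and standard-base at the bottom.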

Each main top-level folder (/job/foo is the root) is a separately
maintained git repository.

This has allowed us to share code between all of our sites, so that
features and fixes can be rapidly deployed, while allowing the
customer's content site to only contain what is required to configure
the bases.


Re: Fwd: Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-14 Thread Raj Saini
So how is this going to work once the code is deployed inside an
application server? For example, as a war or ear, or, let us say, inside
the component folder of OFBiz?








Re: Fwd: Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-14 Thread Adam Heath

On 10/14/2010 12:53 PM, Raj Saini wrote:

So how is this going to work once the code is deployed inside an
application server? For example, as a war or ear, or, let us say, inside
the component folder of OFBiz?


The top-level ofbiz-component.xml defines the top-level webapp.  The 
web.xml configured there then has WebslingerServlet configured, mapped 
to /*.


All requests, for *all* hostnames, are then sent to webslinger.  It
has entities in the database that map multiple hostnames to a single
WebslingerServer instance.  There is a suffix table that chops
domain-name parts off the end, then a mapping table that takes what
is left to a particular instance.  There is also support for * mapping
as a prefix, and different top-level webapp context paths have their
own configuration.  For example, $siteName.localhost/,
$siteName.$desktopName/, and www.$domainName.com/ are mapped by having
'localhost' and '$desktopName' in WebslingerHostSuffix, and then
'$siteName'/ and 'www.$domainName.com'/ in WebslingerHostMapping.
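The suffix-then-mapping lookup can be sketched roughly like this; the `HostLookup` class, the table contents, and the site names are hypothetical stand-ins for the WebslingerHostSuffix and WebslingerHostMapping entities, not Webslinger's actual API:

```java
import java.util.List;
import java.util.Map;

// Sketch only: chop a registered suffix off the end of the hostname, then
// map what is left to a server instance.  All names here are hypothetical.
public class HostLookup {
    // Stand-in for WebslingerHostSuffix rows
    static final List<String> HOST_SUFFIXES = List.of("localhost", "dev.example.com");

    // Stand-in for WebslingerHostMapping rows: remaining host -> instance
    static final Map<String, String> HOST_MAPPINGS = Map.of(
            "acme", "acme-site",          // acme.localhost/ during development
            "www.acme.com", "acme-site"); // production hostname, same instance

    /** Strip any registered suffix from the end, then map what is left. */
    public static String resolve(String hostname) {
        for (String suffix : HOST_SUFFIXES) {
            if (hostname.endsWith("." + suffix)) {
                hostname = hostname.substring(0, hostname.length() - suffix.length() - 1);
                break;
            }
        }
        return HOST_MAPPINGS.get(hostname); // null if no site is mapped
    }

    public static void main(String[] args) {
        System.out.println(resolve("acme.localhost")); // acme-site
        System.out.println(resolve("www.acme.com"));   // acme-site
    }
}
```

Both the developer-facing hostname and the production hostname resolve to the same instance, which is the point of splitting the lookup into a suffix table and a mapping table.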


WebslingerServer then has a commons-vfs file URL.  We generally have
it as
'file:///job/$clientName-$jobName/pages/${CLIENT-NAME}-${JOB-NAME}'.
In that folder, there tend to be a www/, Config/, and Var/.  www/
then has a WEB-INF/, and the rest of the content files.  (We'd created
commons-vfs-compatible lookups for standard ofbiz 'location' URLs too,
ie: ofbiz-component://ecommerce/webslinger-site.)


The parent servlet container is *not* used at all to serve *any*
files.  The only things used from the parent container are HTTP
processing and logging.  Webslinger does everything else itself.


This is done by making webslinger into a servlet container; a
container that can be installed into *other* containers.  We call this
a nested container.  This has meant that webslinger's code base is
rather larger than those of other content systems.  But it allows us to
install standard servlets, and still make use of the overlay filesystem
feature.


For instance, we have the JSP servlet installed inside webslinger, and
when it needs to parse a .jsp file, the actual location of the file
can come from any of a number of overlaid directories.


Then, we have a git repository configured at
/job/$clientName-$siteName.  In there is the above-mentioned pages
folder, and as a sibling there is an ofbiz folder that has a standard
component.  That is linked into the ofbiz hot-deploy folder.  We can
then use that to add job-specific entity definitions, services,
scripts, etc.  Because ofbiz doesn't support runtime addition of
components, it does require restarting, which is a little annoying.


Webslinger itself does support adding new sites and mappings at
runtime, though.


Does that help answer your question?  Or is it just a bunch of Greek,
or too much information?







Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-13 Thread Jacques Le Roux

Scott Gray wrote:

On 13/10/2010, at 5:23 AM, Adam Heath wrote:


On 10/12/2010 11:06 AM, Adrian Crum wrote:

On 10/12/2010 8:55 AM, Adam Heath wrote:

On 10/12/2010 10:25 AM, Adrian Crum wrote:

Actually, a discussion of database versus filesystem storage of content
would be worthwhile. So far there has been some hyperbole, but few
facts.


How do you edit database content? What is the procedure? Can a simple
editor be used? By simple, I mean low-level, like vi.

How do you find all items in your content store that contain a certain
text word? Can grep and find be used?

How do you handle moving changes between a production server, that is
being directly managed by the client, and multiple developer
workstations, which then all have to go first to a staging server? Each
system in this case has its own set of code changes, and config+data
changes, that then have to be selectively picked for staging, before
finally being merged with production.

What about revision control? Can you go back in time to see what the
code+data looked like? Are there separate revision systems, one for the
database, and another for the content? And what about the code?

For users/systems that aren't capable of using revision control, is
there a way for them to mount/browse the content store? Think nfs/samba
here.

Storing everything directly into the filesystem lets you reuse existing
tools, that have been perfected over countless generations of man-years.


I believe Jackrabbit has WebDAV and VFS front ends that will accommodate
file system tools. Watch the movie:

http://www.day.com/day/en/products/crx.html


A front end is wrong.  The store itself is still in some other
system (a database).  The raw store needs to be managed by
the filesystem, so standard tools can move it between locations, or do backups,
etc.  Putting yet another layer on top just to emulate
file access is the wrong way.


<brainstorming>
Let's make a content management system.  Yeah!  Let's do it!  So, we need to be
able to search for content, and maintain links
between relationships.  Let's write brand new code to do that, and put it in the database.


Ok, next, we need to pull the information out of the database, and serve it 
thru an http server.  Oh, damn, it's not running
fast.  Let's have a cache that resides someplace faster than the database.  Oh, 
I know, memory!  Shit, it's using too much
memory.  Let's put the cache in the filesystem.  Updates now remove the cache, 
and have it get rebuilt.  That means read-only
access is faster, but updates then have to rebuild tons of stuff.   


Hmm.  We have a designer request to be able to use photoshop to edit images.  
The server in question is a preview server, is
hosted, and not on his immediate network.  Let's create a new webdav access method, to make the content look like a filesystem. 


Our system is getting heavily loaded.  Let's have a separate database server, 
with multiple web frontends.  Cool, that works.

The system is still heavily loaded, we need a super-huge database server.

Crap, still falling over.  Time to have multiple read-only databases.
</brainstorming>

or...

<brainstorming>
Let's store all our content into the filesystem.  That way, things like 
ExpanDrive(remote ssh fs access for windows) will work
for remote hosted machines.  Caching isn't a problem anymore, as the raw store 
is in files.  Servers have been doing file
sharing for decades, it's a well known problem.  Let's have someone else 
maintain the file sharing code, we'll just use it to
support multiple frontends.  And, ooh, our designers will be able to use the tools they are familiar with to manipulate things. 
And, we won't have the extra code running to maintain all the stuff in the multiple databases.  Cool, we can even use git, with
rebase and merge, to do all sorts of fancy branching and push/pulling between multiple development scenarios.
</brainstorming>


If the raw store was in the filesystem in the first place, then all this 
additional layering wouldn't be needed, to make the
final output end up looking like a filesystem, which is what was being replaced all along. 


To be honest it makes it a little difficult to take you seriously when you
completely disregard the JCR/Jackrabbit approach
without even the slightest hint of objectivity:
if (!myWay) {
    return highway;
}
The JCR was produced by an expert working group driven largely by Day Software,
which has Roy T. Fielding as their chief
scientist.  While I know next to nothing about what constitutes a great CMS
infrastructure, I cannot simply accept that you are
right and they are wrong, especially when you make no attempt whatsoever to
paint the full picture.  I mean, are you suggesting that
a file system based CMS has no downsides?  Your approach is all pros and theirs all cons?


Regards
Scott


Minor detail, but I think Roy T. Fielding was appointed chief scientist after
the JCR had been produced.

Jacques






Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-13 Thread Ean Schuessler
I agree that databases are very, very powerful but they also introduce
fundamental limitations. It depends on your priorities.

For instance, we've found that the processes companies pursue for
editing documentation can be every bit as fluid, complex and partitioned
as source code. I'd ask you, as a serious thought experiment, to
consider what the ramifications would be of managing OFBiz itself in a
Jackrabbit repository. Please don't just punt on me and say "oh, well,
source code is different." That's an argument by dismissal that glosses
over real-world situations where you might have a pilot group editing a set
of process documentation based on the core corporate standards, folding
in changes from HEAD as well as developing their own changes in
conjunction. I've just personally found that the distributed revision
control function is fundamental to managing the kinds of real content
that ends up on websites. Maybe you haven't.

Scott Gray wrote:
 This isn't about casting stones or attempting to belittle webslinger, which I 
 have no doubt is a fantastic piece of work and meets its stated goals 
 brilliantly.  This is about debating why it should be included in OFBiz as a 
 tightly integrated CMS and how well webslinger's goals match up with OFBiz's 
 content requirements (whatever they are, I don't pretend to know).  
 Webslinger was included in the framework with little to no discussion and I'm 
 trying to take the opportunity to have that discussion now.

 I'm not trying to add FUD to the possibility of webslinger taking a more 
 active role in OFBiz, I'm just trying to understand what is being proposed 
 and what the project stands to gain or lose by accepting that proposal.

 Version control with git and the ability to edit content with vi is great but 
 what are we giving up in exchange for that?  Surely there must be something 
 lacking in a file system approach if the vast majority of CMS 
 vendors have shunned it in favor of a database (or database + file system) 
 approach?  I just cannot accept that all of these vendors simply said "durp 
 durp RDBMS! durp durp."  What about non-hierarchical node linking? Content 
 meta-data? Transaction management? Referential integrity? Node types?
   
-- 
Ean Schuessler, CTO
e...@brainfood.com
214-720-0700 x 315
Brainfood, Inc.
http://www.brainfood.com



Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-13 Thread Scott Gray
For me it all comes down to a couple of basic but very important points:
- Webslinger by your own admission takes a vastly different approach from 
anything else on the market and you're asking the OFBiz community to take that 
risk along with you and ignore what everyone else is doing.
- Webslinger has no community behind it and is the product and vision of a 
single company (and within that probably only a single developer understands it 
deeply).  OFBiz takes a big risk by depending upon it in any meaningful way for 
bugfixes, support and documentation, both now and in the future.  Name me one 
other major external library in OFBiz that doesn't come from a well established 
open source community.

I don't pretend for a second to be an expert on the topic of content management 
but I can see those risks staring me in the face.  At the end of the day if the 
community wants webslinger then they'll get it but blindly ignoring the risks 
does no one any good.

Regards
Scott

On 14/10/2010, at 12:34 PM, Ean Schuessler wrote:

 snip
 





Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-13 Thread BJ Freeman
In the early nineties I was hired to take the MSDN portion of Microsoft 
into a document-library type of design.
A user would give a link to a document; the app would then parse the 
document for various MIME types and store them on a file system on 
the network, as well as provide keyword search and associative links from 
one document to another.
This was a network-wide system that covered many offices of Microsoft 
all over the world.

It was more SGML, before HTML and XML became de facto standards.
It was database-centric, in that all the network references to the pieces 
were in the database.  Each department would set up its defaults for where 
its document pieces were stored.


A department that was doing development in a particular area could call 
up all the references already in the database and associate them with 
their work.


Still, to me this is the best marriage between the two worlds.
I see this, along with using OpenOffice for the actual user interface, 
as the way to have a robust document system.


I also believe the basic model for the OFBiz document container could be 
enhanced to allow file-type documents in the data resources.






Ean Schuessler sent the following on 10/13/2010 4:34 PM:


snip



Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Marc Morin
With all the other technologies in ofbiz, it seems like webslinger just adds more 
stuff onto the pile.  I don't want to argue the technical merits of database or 
file system persistence for a CMS, but it appears ofbiz would benefit from 
reducing the number of technologies used and increasing the amount of re-use of 
the technologies it already has.

So, for me, that means entity/service/screen/presentment models are the core 
technologies.   Galvanizing initiatives around those appears to provide leverage.

Now don't get me wrong, the CMS that is native in ofbiz is incomplete and 
needs a lot of work...  and for our use case of providing self-edited web sites 
and ecommerce sites, it appears a better starting point.  We have done things 
to add self-editing etc., but we need to put a lot more effort into that to 
ensure that there is a real solution.

my $0.02.


Marc Morin
Emforium Group Inc. 
ALL-IN Software
519-772-6824 ext 201 
mmo...@emforium.com 

- Original Message -
 On 10/11/2010 10:07 PM, Nico Toerl wrote:
  On 10/12/10 01:41, Adam Heath wrote:
 
  snip
  Now, here it comes. The url to the site.
  http://ofbizdemo.brainfood.com/.
 
  Things to note. There are *no* database calls *at all*. It's all
  done with files on disk. History browsing is backed by git, using
  jgit to read it directly in java. CSS styling is rather poor. Most
   unimplemented pages should do something nice (instead of a big red
   'Not Yet Implemented'); at least there shouldn't be exceptions
   on those pages.
 
  that sounded real interesting and i thought i have to have a look at
  this, unfortunately all i got is:
 
 
 HTTP Status 500 -
 
  
 
  *type* Exception report
 
  *message*
 
  *description* _The server encountered an internal error () that
  prevented it from fulfilling this request._
 
  *exception*
 
  java.lang.NullPointerException
  
  WEB_45$INF.Events.System.Request.DetectUserAgent_46$jn.run(DetectUserAgent.jn:166)
 
 Hmm, nice, thanks.
 
 Your user-agent is:
 
 Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; rv:1.9.2.9)
 Gecko/20100824 Firefox/3.6.9
 
 The (x86_64) is what is causing the problem; I hadn't seen this type
 of string in the wild.  The regex doesn't like nested ().  It's fixed
 now.
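The failure mode is easy to reproduce outside webslinger. A naive pattern stops at the first ')' and truncates the comment, while a pattern that permits one level of nested parentheses captures it whole (a sketch with grep -E; webslinger's actual regex isn't shown in the thread):

```shell
UA='Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; rv:1.9.2.9) Gecko/20100824 Firefox/3.6.9'

# Naive: the first ')' ends the match, cutting the comment short.
printf '%s\n' "$UA" | grep -oE '\([^)]*\)'
# Nesting-aware: permit one level of inner (...) groups.
printf '%s\n' "$UA" | grep -oE '\(([^()]|\([^()]*\))*\)'
```

The first command stops at the ')' that closes (x86_64); the second prints the full parenthesized comment.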


Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Adrian Crum
Actually, a discussion of database versus filesystem storage of content 
would be worthwhile. So far there has been some hyperbole, but few facts.


-Adrian

On 10/12/2010 7:32 AM, Marc Morin wrote:

snip




Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Adam Heath

On 10/11/2010 06:58 PM, Scott Gray wrote:

On 12/10/2010, at 12:37 PM, Adam Heath wrote:


On 10/11/2010 06:26 PM, Scott Gray wrote:

On 12/10/2010, at 11:45 AM, Adam Heath wrote:


On 10/11/2010 04:25 PM, Scott Gray wrote:

On 12/10/2010, at 10:03 AM, Adam Heath wrote:


On 10/11/2010 02:37 PM, Jacques Le Roux wrote:

Impressive, now I know what Webslinger is and what it is capable of!


Actually, this is just one application.  Webslinger(-core) is an enabling 
technology that lets applications be written quickly.  As I said, I've only 
spent probably two actual weeks on the application itself.


The main question in my mind is what does all this mean for OFBiz?  Obviously 
because webslinger is currently in the framework you envisage it playing some 
sort of role in the ERP applications, but what exactly?


It means that webslinger could run all of cwiki.apache.org, being fully java 
dynamic.  The front page is currently giving me 250req/s with single 
concurrency, and 750req/s with a concurrency of 5.  And, ofbiz would be running 
along side, so that we could do other things as well.


That wasn't what I was asking but since you mention it, what does
that actually mean for us?  Part of reason we moved to the ASF was
so that we could rely on their infrastructure instead of maintaining
our own.  Assuming we replaced confluence with webslinger then what
do we do if you disappear from the scene in a year's time?  The idea
of learning a new obscure tool doesn't sound very appealing.


Who said that this was going to stay a brainfood-only project?


No one and I didn't make that assumption.


We have every intention of making webslinger(-core) a public, community 
project.  There isn't anything really like this.


I'm sure every dead open source project had the intention of building a 
thriving community but it doesn't always work out that way.  What I am asking 
is what will the OFBiz documentation gain by being hosted on webslinger(-core?) 
that makes it worth the risk of the project being abandoned and us having to 
move it all back to confluence or whatever the ASF is using by then?

And what is (-core)?  Does that imply that there is a
webslinger(-pro) edition that OFBiz users can take advantage of
by contracting with or licensing from brainfood?  I don't think
a little skepticism  is out of order when you tell us how wonderful
it would be for OFBiz to include webslinger if your company stands
to benefit from its inclusion.  I'm not even saying that's a bad
thing, I just prefer to have the full picture.


Yeah, I'm not surprised you picked up on that.  It's a very good question.

webslinger-core is the enabler.  It has no application logic.  It just 
makes it simple(r) to write applications.  It would be like taking the 
entityengine, serviceengine, all the widget systems, and the 
controller, but with *no* config files, no entitymodel, no service 
definitions, etc.  webslinger-core is the system-level classes, and 
nothing else.


Webslinger, however, is the combination of the core and all the 
runtime classes/css/html templates.  This would be similar to the 
actions, ftl, and widget definitions, but only in framework.


Finally, a webslinger application would then be everything that exists 
in an ofbiz applications/foo or specialpurpose/foo folder.


There will never be a difference in webslinger-core, between an 
internal/external system.  It would take more time to try and keep 
those things separate.


Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Adam Heath

On 10/12/2010 10:25 AM, Adrian Crum wrote:

Actually, a discussion of database versus filesystem storage of content
would be worthwhile. So far there has been some hyperbole, but few facts.


How do you edit database content?  What is the procedure?  Can a 
simple editor be used?  By simple, I mean low-level, like vi.


How do you find all items in your content store that contain a certain 
text word?  Can grep and find be used?


How do you handle moving changes between a production server, that is 
being directly managed by the client, and multiple developer 
workstations, which then all have to go first to a staging server? 
Each system in this case has its own set of code changes, and 
config+data changes, that then have to be selectively picked for 
staging, before finally being merged with production.


What about revision control?  Can you go back in time to see what the 
code+data looked like?  Are there separate revision systems, one for 
the database, and another for the content?  And what about the code?


For users/systems that aren't capable of using revision control, is 
there a way for them to mount/browse the content store?  Think 
nfs/samba here.


Storing everything directly in the filesystem lets you reuse 
existing tools that have been perfected over countless man-years.
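Those questions have one-line answers when the store is plain files. A sketch against a throwaway directory standing in for a real /job content checkout (paths and file names are made up for illustration):

```shell
store=$(mktemp -d)   # stands in for a /job/<client>-<job> checkout
mkdir -p "$store/pages/account/newsletters"
printf 'Newsletter signup form\n' > "$store/pages/account/newsletters/List.vm"
printf 'Password reset form\n'    > "$store/pages/account/View.vm"

# "Find all items in your content store that contain a certain text word":
grep -rli 'newsletter' "$store/pages"

# "Can grep and find be used?" -- find filters by name, type, mtime, etc.:
find "$store/pages" -name '*.vm' | sort
```

No query language, no export step: the same commands work over nfs/samba mounts and inside a git checkout.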




snip






Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Adrian Crum

On 10/12/2010 8:55 AM, Adam Heath wrote:

snip


I believe Jackrabbit has WebDAV and VFS front ends that will accommodate 
file system tools. Watch the movie:


http://www.day.com/day/en/products/crx.html

-Adrian












Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Adam Heath

On 10/12/2010 11:06 AM, Adrian Crum wrote:

snip


I believe Jackrabbit has WebDAV and VFS front ends that will accommodate
file system tools. Watch the movie:

http://www.day.com/day/en/products/crx.html


A front end is wrong.  The store itself is still in some other 
system (a database).  The raw store needs to be managed by the 
filesystem, so standard tools can move it between locations, do 
backups, etc.  Putting yet another layer in just to emulate file 
access is the wrong way.


brainstorming
Let's make a content management system.  Yeah!  Let's do it!  So, we 
need to be able to search for content, and maintain links and 
relationships between items.  Let's write brand new code to do that, 
and put it in the database.


Ok, next, we need to pull the information out of the database, and 
serve it thru an http server.  Oh, damn, it's not running fast.  Let's 
have a cache that resides someplace faster than the database.  Oh, I 
know, memory!  Shit, it's using too much memory.  Let's put the cache 
in the filesystem.  Updates now remove the cache, and have it get 
rebuilt.  That means read-only access is faster, but updates then have 
to rebuild tons of stuff.


Hmm.  We have a designer request to be able to use photoshop to edit 
images.  The server in question is a preview server, is hosted, and 
not on his immediate network.  Let's create a new webdav access 
method, to make the content look like a filesystem.


Our system is getting heavily loaded.  Let's have a separate database 
server, with multiple web frontends.  Cool, that works.


The system is still heavily loaded, we need a super-huge database server.

Crap, still falling over.  Time to have multiple read-only databases.
/brainstorming

or...

brainstorming
Let's store all our content in the filesystem.  That way, things 
like ExpanDrive (remote SSH filesystem access for Windows) will work for 
remotely hosted machines.  Caching isn't a problem anymore, as the raw store is 
in files.  Servers have been doing file sharing for decades, it's a 
well known problem.  Let's have someone else maintain the file sharing 
code, we'll just use it to support multiple frontends.  And, ooh, our 
designers will be able to use the tools they are familiar with to 
manipulate things.  And, we won't have the extra code running to 
maintain all the stuff in the multiple databases.  Cool, we can even 
use git, with rebase and merge, to do all sorts of fancy branching and 
push/pulling between multiple development scenarios.

/brainstorming

If the raw store were in the filesystem in the first place, then all 
this additional layering wouldn't be needed to make the final output 
end up looking like a filesystem, which is what was being replaced all 
along.
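The "fancy branching and push/pulling between multiple development scenarios" is just stock git once content is files. A sketch in a throwaway repository (the branch name, file name, and user identity are placeholders):

```shell
content=$(mktemp -d) && cd "$content"   # stands in for a content checkout
git init -q
base=$(git symbolic-ref --short HEAD)   # default branch name varies by git version
printf 'v1\n' > index.vm
git add index.vm
git -c user.email=dev@example.com -c user.name=dev commit -qm 'initial page'

# Developer work proceeds on a branch, content and config alike:
git checkout -qb redesign
printf 'v2 redesign\n' > index.vm
git -c user.email=dev@example.com -c user.name=dev commit -qam 'redesign'

# Fold the branch back, as when promoting changes to staging/production:
git checkout -q "$base"
git merge -q --no-edit redesign
cat index.vm
```

The same commands give history browsing (git log), time travel (git checkout), and selective picking (git cherry-pick) over code and content in one tool.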






Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Adrian Crum

On 10/12/2010 9:23 AM, Adam Heath wrote:

On 10/12/2010 11:06 AM, Adrian Crum wrote:

On 10/12/2010 8:55 AM, Adam Heath wrote:

On 10/12/2010 10:25 AM, Adrian Crum wrote:

Actually, a discussion of database versus filesystem storage of content
would be worthwhile. So far there has been some hyperbole, but few
facts.


How do you edit database content? What is the procedure? Can a simple
editor be used? By simple, I mean low-level, like vi.

How do you find all items in your content store that contain a certain
text word? Can grep and find be used?

How do you handle moving changes between a production server, that is
being directly managed by the client, and multiple developer
workstations, which then all have to go first to a staging server? Each
system in this case has its own set of code changes, and config+data
changes, that then have to be selectively picked for staging, before
finally being merged with production.

What about revision control? Can you go back in time to see what the
code+data looked like? Are there separate revision systems, one for the
database, and another for the content? And what about the code?

For users/systems that aren't capable of using revision control, is
there a way for them to mount/browse the content store? Think nfs/samba
here.

Storing everything directly into the filesystem lets you reuse existing
tools, that have been perfected over countless generations of man-years.


I believe Jackrabbit has WebDAV and VFS front ends that will accommodate
file system tools. Watch the movie:

http://www.day.com/day/en/products/crx.html


Front end it wrong. It still being the store itself is in some other
system(database). The raw store needs to be managed by the filesystem,
so standard tools can move it between locations, or do backups, etc.
Putting yet another layer just to emulate file access is the wrong way.

brainstorming
Let's make a content management system. Yeah! Let's do it! So, we need
to be able to search for content, and mantain links between
relationships. Let's write brand new code to do that, and put it in the
database.

Ok, next, we need to pull the information out of the database, and serve
it thru an http server. Oh, damn, it's not running fast. Let's have a
cache that resides someplace faster than the database. Oh, I know,
memory! Shit, it's using too much memory. Let's put the cache in the
filesystem. Updates now remove the cache, and have it get rebuilt. That
means read-only access is faster, but updates then have to rebuild tons
of stuff.

Hmm. We have a designer request to be able to use photoshop to edit
images. The server in question is a preview server, is hosted, and not
on his immediate network. Let's create a new webdav access method, to
make the content look like a filesystem.

Our system is getting heavily loaded. Let's have a separate database
server, with multiple web frontends. Cool, that works.

The system is still heavily loaded, we need a super-huge database server.

Crap, still falling over. Time to have multiple read-only databases.
/brainstorming

or...

brainstorming
Let's store all our content into the filesystem. That way, things like
ExpanDrive(remote ssh fs access for windows) will work for remote hosted
machines. Caching isn't a problem anymore, as the raw store is in files.
Servers have been doing file sharing for decades, it's a well known
problem. Let's have someone else maintain the file sharing code, we'll
just use it to support multiple frontends. And, ooh, our designers will
be able to use the tools they are familiar with to manipulate things.
And, we won't have the extra code running to maintain all the stuff in
the multiple databases. Cool, we can even use git, with rebase and
merge, to do all sorts of fancy branching and push/pulling between
multiple development scenarios.
/brainstorming

If the raw store were in the filesystem in the first place, then all this
additional layering wouldn't be needed just to make the final output end up
looking like a filesystem, which is what was being replaced all along.


Okay. Will webslinger provide a JCR interface?

-Adrian


On 10/12/2010 7:32 AM, Marc Morin wrote:

With all the other technologies in ofbiz, seems like webslinger just
adds more stuff onto the pile. I don't want to argue the technical
merits of database or file system persistence for a CMS, but it
appears like ofbiz would benefit from reducing the number of
technologies used, and increase the amount of re-use of technologies
it already has.

So, for me, that means entity/service/screen/presentment models are
the core technologies. Galvanizing initiatives around those appear to
provide leverage.

Now don't get me wrong, the CMS that is native in ofbiz is
incomplete and needs a lot of work... and for our use case of
providing self edited web sites and ecommerce sites, that appears a
better starting point. We have done things to add self editing etc...
but we need to put a lot more effort into that to ensure that there is
a 

Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Adam Heath

On 10/12/2010 11:26 AM, Adrian Crum wrote:

On 10/12/2010 9:23 AM, Adam Heath wrote:

On 10/12/2010 11:06 AM, Adrian Crum wrote:

On 10/12/2010 8:55 AM, Adam Heath wrote:

On 10/12/2010 10:25 AM, Adrian Crum wrote:

Actually, a discussion of database versus filesystem storage of
content
would be worthwhile. So far there has been some hyperbole, but few
facts.


How do you edit database content? What is the procedure? Can a simple
editor be used? By simple, I mean low-level, like vi.

How do you find all items in your content store that contain a certain
text word? Can grep and find be used?

How do you handle moving changes between a production server, that is
being directly managed by the client, and multiple developer
workstations, which then all have to go first to a staging server? Each
system in this case has its own set of code changes, and config+data
changes, that then have to be selectively picked for staging, before
finally being merged with production.

What about revision control? Can you go back in time to see what the
code+data looked like? Are there separate revision systems, one for the
database, and another for the content? And what about the code?

For users/systems that aren't capable of using revision control, is
there a way for them to mount/browse the content store? Think nfs/samba
here.

Storing everything directly into the filesystem lets you reuse existing
tools, that have been perfected over countless generations of
man-years.


I believe Jackrabbit has WebDAV and VFS front ends that will accommodate
file system tools. Watch the movie:

http://www.day.com/day/en/products/crx.html


Fronting it is wrong. The store itself still lives in some other
system (a database). The raw store needs to be managed by the filesystem,
so standard tools can move it between locations, or do backups, etc.
Putting yet another layer in just to emulate file access is the wrong way.

brainstorming
Let's make a content management system. Yeah! Let's do it! So, we need
to be able to search for content, and maintain links between related
items. Let's write brand new code to do that, and put it in the
database.

Ok, next, we need to pull the information out of the database, and serve
it thru an http server. Oh, damn, it's not running fast. Let's have a
cache that resides someplace faster than the database. Oh, I know,
memory! Shit, it's using too much memory. Let's put the cache in the
filesystem. Updates now remove the cache, and have it get rebuilt. That
means read-only access is faster, but updates then have to rebuild tons
of stuff.

Hmm. We have a designer request to be able to use photoshop to edit
images. The server in question is a preview server, is hosted, and not
on his immediate network. Let's create a new webdav access method, to
make the content look like a filesystem.

Our system is getting heavily loaded. Let's have a separate database
server, with multiple web frontends. Cool, that works.

The system is still heavily loaded, we need a super-huge database server.

Crap, still falling over. Time to have multiple read-only databases.
/brainstorming

or...

brainstorming
Let's store all our content into the filesystem. That way, things like
ExpanDrive(remote ssh fs access for windows) will work for remote hosted
machines. Caching isn't a problem anymore, as the raw store is in files.
Servers have been doing file sharing for decades, it's a well known
problem. Let's have someone else maintain the file sharing code, we'll
just use it to support multiple frontends. And, ooh, our designers will
be able to use the tools they are familiar with to manipulate things.
And, we won't have the extra code running to maintain all the stuff in
the multiple databases. Cool, we can even use git, with rebase and
merge, to do all sorts of fancy branching and push/pulling between
multiple development scenarios.
/brainstorming

If the raw store were in the filesystem in the first place, then all this
additional layering wouldn't be needed just to make the final output end up
looking like a filesystem, which is what was being replaced all along.
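The "reuse existing tools" point above is easy to make concrete: when the raw store is plain files, a grep-equivalent over the content store is a few lines of stock JDK code, no CMS layer involved. This is an illustrative sketch only; the content root and file names below are made up:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ContentGrep {
    // Equivalent of `grep -rl word root`: list every regular file under
    // root whose text contains the given word.
    static List<Path> filesContaining(Path root, String word) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            return paths.filter(Files::isRegularFile)
                        .filter(p -> {
                            try {
                                return Files.readString(p).contains(word);
                            } catch (IOException e) {
                                return false; // skip binary/undecodable files
                            }
                        })
                        .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway two-file "content store" and search it.
        Path root = Files.createTempDirectory("store");
        Files.writeString(root.resolve("index.vm"), "Welcome to the shop");
        Files.writeString(root.resolve("about.vm"), "About the company");
        System.out.println(filesContaining(root, "shop").size()); // 1
    }
}
```

Of course, in practice you would just run grep or find directly; the point is that the store imposes no barrier either way.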


Okay. Will webslinger provide a JCR interface?


It could.  Or maybe Jackrabbit should have its filesystem backends 
improved (or created).


However, the major problem we have with all those other systems is 
still a big performance issue.  Synchronization sucks under load. 
Webslinger doesn't synchronize.  It makes *very* heavy use of 
concurrent programming techniques.  The problem arises when certain 
API definitions require you to call multiple methods to fetch, then 
update.  Such methods are broken when doing non-blocking algorithms, 
so the fix in those situations is to have a synchronized block.  But 
then you have a choke point.


*Any* time you have 2 separate methods, get (or contains), followed by 
a put (or remove), you must deal with multiple threads doing the exact 
same thing.  You can either synchronize, or alter the latter methods 
with put(key, old, new) and 
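The check-then-act race described here, and the compare-and-swap style fix with a three-argument put(key, old, new), can be sketched with the JDK's own ConcurrentMap. This is an illustrative sketch, not webslinger's actual code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CasIncrement {
    // Broken under load: two separate calls form a check-then-act race.
    // Another thread can update the map between get() and put(), and
    // its increment is silently lost.
    static void incrementRacy(ConcurrentMap<String, Integer> map, String key) {
        Integer old = map.get(key);
        map.put(key, old == null ? 1 : old + 1);
    }

    // Non-blocking fix: loop on replace(key, old, new), which only
    // succeeds if the value is still `old`.  No synchronized block,
    // so there is no choke point.
    static void incrementCas(ConcurrentMap<String, Integer> map, String key) {
        while (true) {
            Integer old = map.putIfAbsent(key, 1);
            if (old == null) return;                    // we stored the first value
            if (map.replace(key, old, old + 1)) return; // CAS won; otherwise retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int n = 0; n < 1000; n++) incrementCas(hits, "page");
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(hits.get("page")); // 4000: no lost updates
    }
}
```

Running the same workload through incrementRacy would typically lose some updates and print less than 4000.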

Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Marc Morin
All of your examples are developer examples.  We are focused on end-users, so 
we don't expect them to use vi, grep, or anything like that.

- Original Message -
 On 10/12/2010 10:25 AM, Adrian Crum wrote:
  Actually, a discussion of database versus filesystem storage of
  content would be worthwhile. So far there has been some hyperbole,
  but few facts.
 
 How do you edit database content? What is the procedure? Can a
 simple editor be used? By simple, I mean low-level, like vi.

No, you run the UI editor/configuration tool.

 
 How do you find all items in your content store that contain a certain
 text word? Can grep and find be used?

can't use grep.

 
 How do you handle moving changes between a production server, that is
 being directly managed by the client, and multiple developer
 workstations, which then all have to go first to a staging server?
 Each system in this case has its own set of code changes, and
 config+data changes, that then have to be selectively picked for
 staging, before finally being merged with production.
 
 What about revision control? Can you go back in time to see what the
 code+data looked like? Are there separate revision systems, one for
 the database, and another for the content? And what about the code?

In our use case, there is no code.  Only a construction of gadgets to make up 
pages.  The code is for the gadgets.
Yes, think of Concrete5, Joomla, et al.

 
 For users/systems that aren't capable of using revision control, is
 there a way for them to mount/browse the content store? Think
 nfs/samba here.

Nope.

 
 Storing everything directly into the filesystem lets you reuse
 existing tools, that have been perfected over countless generations of
 man-years.

If you're a developer.

 
 
  -Adrian
 
  On 10/12/2010 7:32 AM, Marc Morin wrote:
  With all the other technologies in ofbiz, seems like webslinger
  just adds more stuff onto the pile. I don't want to argue the
  technical merits of database or file system persistence for a CMS,
  but it
  appears like ofbiz would benefit from reducing the number of
  technologies used, and increase the amount of re-use of
  technologies it already has.
 
  So, for me, that means entity/service/screen/presentment models are
  the core technologies. Galvanizing initiatives around those appear
  to provide leverage.
 
  Now don't get me wrong, the CMS that is native in ofbiz is
  incomplete and needs a lot of work... and for our use case of
  providing self edited web sites and ecommerce sites, that appears a
  better starting point. We have done things to add self editing
  etc... but we need to put a lot more effort into that to ensure
  that there is
  a real solution.
 
  my $0.02.
 
 
  Marc Morin
  Emforium Group Inc.
  ALL-IN Software
  519-772-6824 ext 201
  mmo...@emforium.com
 
  - Original Message -
  On 10/11/2010 10:07 PM, Nico Toerl wrote:
  On 10/12/10 01:41, Adam Heath wrote:
 
  snip
  Now, here it comes. The url to the site.
  http://ofbizdemo.brainfood.com/.
 
  Things to note. There are *no* database calls *at all*. It's all
  done with files on disk. History browsing is backed by git,
  using jgit to read it directly in java. CSS styling is rather
  poor. Most
  unimplemented pages should do something nice (instead of a big
  red 'Not Yet Implemented'); at least there shouldn't be any
  exceptions on those pages.
 
  that sounded real interesting and i thought i have to have a look
  at
  this, unfortunately all i got is:
 
 
  HTTP Status 500 -
 
  
 
 
  *type* Exception report
 
  *message*
 
  *description* _The server encountered an internal error () that
  prevented it from fulfilling this request._
 
  *exception*
 
  java.lang.NullPointerException
  WEB_45$INF.Events.System.Request.DetectUserAgent_46$jn.run(DetectUserAgent.jn:166)
 
 
  Hmm, nice, thanks.
 
  Your user-agent is:
 
  Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; rv:1.9.2.9)
  Gecko/20100824 Firefox/3.6.9
 
  The (x86_64) is what is causing the problem; I hadn't seen this
  type of string in the wild. The regex doesn't like nested (). It's
  fixed now.
 


Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Adam Heath

On 10/12/2010 11:50 AM, Marc Morin wrote:

All of your examples are developer examples.  We are focused on end-users, so 
we don't expect them to use vi, grep, or anything like that.


That's the problem.  Don't treat your developers and users differently. 
 It means you end up writing *more* code, to support different access 
patterns.  Just write one set of code, and all modifications are done 
the same way.


Yes, we have front-end editing.  The url (ofbizdemo.brainfood.com) 
doesn't have any editing configured or installed, as I am creating 
new editing screens for it (it's a new application).


However, that editing just ends up modifying files, like you would 
normally do from the command line, and ends up calling git 
add/remove/commit, just like you'd do from the command line.
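As a hedged illustration of that edit-then-commit flow (webslinger actually drives git in-process via jgit; this sketch just shells out to the git CLI, and the paths and commit message are made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileEditCommit {
    // Run a git command inside the given repository, failing loudly on error.
    static void git(Path repo, String... args) throws IOException, InterruptedException {
        String[] cmd = new String[args.length + 1];
        cmd[0] = "git";
        System.arraycopy(args, 0, cmd, 1, args.length);
        Process p = new ProcessBuilder(cmd).directory(repo.toFile())
                                           .inheritIO().start();
        if (p.waitFor() != 0)
            throw new IOException("git " + args[0] + " failed");
    }

    public static void main(String[] args) throws Exception {
        Path repo = Files.createTempDirectory("site");
        git(repo, "init", "-q");

        // A front-end edit is just a file write...
        Files.writeString(repo.resolve("index.vm"), "<h1>Welcome</h1>\n");

        // ...followed by the same add/commit a command-line user would run.
        git(repo, "add", "index.vm");
        git(repo, "-c", "user.name=editor", "-c", "user.email=editor@example.com",
            "commit", "-q", "-m", "Edit index.vm from the web UI");
        System.out.println("committed");
    }
}
```

Either route, UI or shell, leaves the same history in the repository, which is the whole point of not treating developers and users differently.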


Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread BJ Freeman
It seems that many programmers feel it is better to have the user spend 
time learning their system than for the programmer to learn the user's 
way of doing things and reduce the learning curve for the user.




Adam Heath sent the following on 10/12/2010 10:21 AM:

That's the problem.  Don't treat your developers and users differently.
  It means you end up writing *more* code, to support different access
patterns.  Just write one set of code, and all modifications are done
the same way.


Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Adam Heath

On 10/12/2010 04:31 PM, BJ Freeman wrote:

It seems that many programmers feel it is better to have the user spend
time learning their system than for the programmer to learn the user's
way of doing things and reduce the learning curve for the user.


Exactly.  The users of webslinger are those creating the backend 
events, or the designers writing the html fragments.  They use their 
own preferred editor.  This means those people don't have to learn a 
new way to manipulate the backend files.  This is a good thing.


Then, with the backend code and template files stored in the 
filesystem, the actual content itself is also stored in the 
filesystem.  Why have a different storage module for the content, then 
you do for the application?


Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Scott Gray
On 13/10/2010, at 5:23 AM, Adam Heath wrote:

 On 10/12/2010 11:06 AM, Adrian Crum wrote:
 On 10/12/2010 8:55 AM, Adam Heath wrote:
 On 10/12/2010 10:25 AM, Adrian Crum wrote:
 Actually, a discussion of database versus filesystem storage of content
 would be worthwhile. So far there has been some hyperbole, but few
 facts.
 
 How do you edit database content? What is the procedure? Can a simple
 editor be used? By simple, I mean low-level, like vi.
 
 How do you find all items in your content store that contain a certain
 text word? Can grep and find be used?
 
 How do you handle moving changes between a production server, that is
 being directly managed by the client, and multiple developer
 workstations, which then all have to go first to a staging server? Each
 system in this case has its own set of code changes, and config+data
 changes, that then have to be selectively picked for staging, before
 finally being merged with production.
 
 What about revision control? Can you go back in time to see what the
 code+data looked like? Are there separate revision systems, one for the
 database, and another for the content? And what about the code?
 
 For users/systems that aren't capable of using revision control, is
 there a way for them to mount/browse the content store? Think nfs/samba
 here.
 
 Storing everything directly into the filesystem lets you reuse existing
 tools, that have been perfected over countless generations of man-years.
 
 I believe Jackrabbit has WebDAV and VFS front ends that will accommodate
 file system tools. Watch the movie:
 
 http://www.day.com/day/en/products/crx.html
 
 Fronting it is wrong.  The store itself still lives in some other 
 system (a database).  The raw store needs to be managed by the filesystem, so 
 standard tools can move it between locations, or do backups, etc.  Putting 
 yet another layer in just to emulate file access is the wrong way.
 
 brainstorming
 Let's make a content management system.  Yeah!  Let's do it!  So, we need to 
 be able to search for content, and maintain links between related items. 
 Let's write brand new code to do that, and put it in the database.
 
 Ok, next, we need to pull the information out of the database, and serve it 
 thru an http server.  Oh, damn, it's not running fast.  Let's have a cache 
 that resides someplace faster than the database.  Oh, I know, memory!  Shit, 
 it's using too much memory.  Let's put the cache in the filesystem.  Updates 
 now remove the cache, and have it get rebuilt.  That means read-only access 
 is faster, but updates then have to rebuild tons of stuff.
 
 Hmm.  We have a designer request to be able to use photoshop to edit images.  
 The server in question is a preview server, is hosted, and not on his 
 immediate network.  Let's create a new webdav access method, to make the 
 content look like a filesystem.
 
 Our system is getting heavily loaded.  Let's have a separate database server, 
 with multiple web frontends.  Cool, that works.
 
 The system is still heavily loaded, we need a super-huge database server.
 
 Crap, still falling over.  Time to have multiple read-only databases.
 /brainstorming
 
 or...
 
 brainstorming
 Let's store all our content into the filesystem.  That way, things like 
 ExpanDrive(remote ssh fs access for windows) will work for remote hosted 
 machines.  Caching isn't a problem anymore, as the raw store is in files.  
 Servers have been doing file sharing for decades, it's a well known problem.  
 Let's have someone else maintain the file sharing code, we'll just use it to 
 support multiple frontends.  And, ooh, our designers will be able to use the 
 tools they are familiar with to manipulate things.  And, we won't have the 
 extra code running to maintain all the stuff in the multiple databases.  
 Cool, we can even use git, with rebase and merge, to do all sorts of fancy 
 branching and push/pulling between multiple development scenarios.
 /brainstorming
 
 If the raw store were in the filesystem in the first place, then all this 
 additional layering wouldn't be needed just to make the final output end up 
 looking like a filesystem, which is what was being replaced all along.

To be honest it makes it a little difficult to take you seriously when you 
completely disregard the JCR/Jackrabbit approach without even the slightest 
hint of objectivity:
if (!myWay) {
return highway;
}
The JCR was produced by an expert working group driven largely by Day Software, 
which has Roy T. Fielding as its chief scientist.  While I know next to 
nothing about what constitutes a great CMS infrastructure, I cannot simply 
accept that you are right and they are wrong, especially when you make no 
attempt whatsoever to paint the full picture.  I mean, are you suggesting that a 
file system based CMS has no downsides?  Your approach is all pros and 
theirs all cons?

Regards
Scott



Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Ean Schuessler
We think it's interesting and handy to manage our web content using git.
It's hard to do that with Jackrabbit, especially in its preferred
configuration of a database-backed store. I think that is a pretty
reasoned explanation. I don't see Adam or me casting stones at your CMS
test application, so please consider lightening up. Thanks. :-D

Scott Gray wrote:
 To be honest it makes it a little difficult to take you seriously when you 
 completely disregard the JCR/Jackrabbit approach without even the slightest 
 hint of objectivity:
 if (!myWay) {
 return highway;
 }
 The JCR was produced by an expert working group driven largely by Day 
 Software which has Roy T. Fielding as their chief scientist.  While I know 
 next to nothing about what constitutes a great CMS infrastructure I cannot 
 simply accept that you are right and they are wrong especially when you make 
 no attempt whatsoever to paint the full picture, I mean are you suggesting 
 that a file system based CMS has no downsides?  Your approach is filled with 
 pros and theirs all cons?
   
-- 
Ean Schuessler, CTO
e...@brainfood.com
214-720-0700 x 315
Brainfood, Inc.
http://www.brainfood.com



Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Scott Gray
This isn't about casting stones or attempting to belittle webslinger, which I 
have no doubt is a fantastic piece of work and meets its stated goals 
brilliantly.  This is about debating why it should be included in OFBiz as a 
tightly integrated CMS and how well webslinger's goals match up with OFBiz's 
content requirements (whatever they are, I don't pretend to know).  Webslinger 
was included in the framework with little to no discussion and I'm trying to 
take the opportunity to have that discussion now.

I'm not trying to add FUD to the possibility of webslinger taking a more active 
role in OFBiz, I'm just trying to understand what is being proposed and what 
the project stands to gain or lose by accepting that proposal.

Version control with git and the ability to edit content with vi is great, but 
what are we giving up in exchange for that?  Surely there must be something 
lacking in a file system approach if the vast majority of CMS vendors 
have shunned it in favor of a database (or database + file system) approach?  I 
just cannot accept that all of these vendors simply said durp durp RDBMS! durp 
durp.  What about non-hierarchical node linking? Content meta-data? 
Transaction management? Referential integrity? Node types?

Regards
Scott

On 13/10/2010, at 11:01 AM, Ean Schuessler wrote:

 We think it's interesting and handy to manage our web content using git.
 It's hard to do that with Jackrabbit, especially in its preferred
 configuration of a database-backed store. I think that is a pretty
 reasoned explanation. I don't see Adam or me casting stones at your CMS
 test application, so please consider lightening up. Thanks. :-D
 
 Scott Gray wrote:
 To be honest it makes it a little difficult to take you seriously when you 
 completely disregard the JCR/Jackrabbit approach without even the slightest 
 hint of objectivity:
 if (!myWay) {
return highway;
 }
 The JCR was produced by an expert working group driven largely by Day 
 Software which has Roy T. Fielding as their chief scientist.  While I know 
 next to nothing about what constitutes a great CMS infrastructure I cannot 
 simply accept that you are right and they are wrong especially when you make 
 no attempt whatsoever to paint the full picture, I mean are you suggesting 
 that a file system based CMS has no downsides?  Your approach is filled with 
 pros and theirs all cons?
 
 -- 
 Ean Schuessler, CTO
 e...@brainfood.com
 214-720-0700 x 315
 Brainfood, Inc.
 http://www.brainfood.com
 





Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Adrian Crum

On 10/12/2010 3:39 PM, Scott Gray wrote:

This is about debating why it should be included in OFBiz as a tightly 
integrated CMS and how well webslinger's goals match up with OFBiz's content 
requirements (whatever they are, I don't pretend to know).


I thought one of the goals was to replace OFBiz's content repository 
with something off-the-shelf. The idea behind using JCR was to avoid 
being locked into a specific product. In other words, if OFBiz talks to 
JCR, then OFBiz can use any JCR-compliant repository.


That's why I asked Adam if there would be a JCR interface for 
webslinger. Webslinger could be one of many JCR-compliant repositories 
to choose from.


I believe another thing that comes into play in this discussion is how 
people are picturing a CMS being used in OFBiz. I get the impression 
Adam pictures it being used for website authoring. On the other hand, I 
picture OFBiz retrieving documents from existing corporate repositories 
to be served up in web pages. So, an OFBiz CMS might mean different 
things to different people, and each person's requirement might drive 
the decision to use Webslinger or something else.


-Adrian


Webslinger was included in the framework with little to no discussion and I'm 
trying to take the opportunity to have that discussion now.

I'm not trying to add FUD to the possibility of webslinger taking a more active 
role in OFBiz, I'm just trying to understand what is being proposed and what 
the project stands to gain or lose by accepting that proposal.

Version control with git and the ability to edit content with vi is great but what are we 
giving up in exchange for that?  Surely there must be something lacking in a file system 
approach if the extremely vast majority of CMS vendors have shunned it in favor of a 
database (or database + file system) approach?  I just cannot accept that all of these 
vendors simply said durp durp RDBMS! durp durp.  What about non-hierarchical 
node linking? Content meta-data? Transaction management? Referential integrity? Node 
types?

Regards
Scott

On 13/10/2010, at 11:01 AM, Ean Schuessler wrote:


We think it's interesting and handy to manage our web content using git.
It's hard to do that with Jackrabbit, especially in its preferred
configuration of a database-backed store. I think that is a pretty
reasoned explanation. I don't see Adam or me casting stones at your CMS
test application, so please consider lightening up. Thanks. :-D

Scott Gray wrote:

To be honest it makes it a little difficult to take you seriously when you 
completely disregard the JCR/Jackrabbit approach without even the slightest 
hint of objectivity:
if (!myWay) {
return highway;
}
The JCR was produced by an expert working group driven largely by Day Software 
which has Roy T. Fielding as their chief scientist.  While I know next to 
nothing about what constitutes a great CMS infrastructure I cannot simply 
accept that you are right and they are wrong especially when you make no 
attempt whatsoever to paint the full picture, I mean are you suggesting that a 
file system based CMS has no downsides?  Your approach is filled with pros and 
theirs all cons?


--
Ean Schuessler, CTO
e...@brainfood.com
214-720-0700 x 315
Brainfood, Inc.
http://www.brainfood.com





Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Raj Saini


Then, with the backend code and template files stored in the 
filesystem, the actual content itself is also stored in the 
filesystem.  Why have a different storage module for the content, then 
you do for the application?


I don't think it is a good idea to store your code and data together. 
Data is something you need to back up regularly, while your code 
is generally in binary form and easily reproducible, such as by deploying a 
war or jar file.





Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-12 Thread Raj Saini



To be honest it makes it a little difficult to take you seriously when you 
completely disregard the JCR/Jackrabbit approach without even the slightest 
hint of objectivity
if (!myWay) {
 return highway;
}
The JCR was produced by an expert working group driven largely by Day Software 
which has Roy T. Fielding as their chief scientist.  While I know next to 
nothing about what constitutes a great CMS infrastructure I cannot simply 
accept that you are right and they are wrong especially when you make no 
attempt whatsoever to paint the full picture, I mean are you suggesting that a 
 file system based CMS has no downsides?  Your approach is filled with pros and 
 theirs all cons?
   
Subversion is a good example of using a database to store the contents 
(source). Subversion does not use flat files to store the files; it uses 
either BDB or FSFS. Although FSFS is a filesystem-backed store, its files 
are not plain files to be manipulated directly. Generally, applications 
using filesystem files add their own header information.





Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Adam Heath

On 10/11/2010 12:41 PM, Adam Heath wrote:

So, about a month ago, people on this list wanted to see an example
website implemented in webslinger. At the time, I had a preliminary
version of specialpurpose/ofbizwebsite converted. I mentioned that I
would finish that up, and make it available.

However, that component in svn only has 2 real pages; not enough for a
demonstration. So I went looking for more content; I found it on
cwiki.apache.org.

The next question became how to get that data into something more
usable. Well, here is where the fun begins. :)

I have an importer that uses confluence's rpc, to fetch the following
items:

* pages, and all their versions
* labels(no versions)
* parent/child relationships(no versions)
* attachments, and all their versions
* comments(no versions stored)
* blogs, with versions
* comments(no versions stored)
* users
* followers/following
* profilepics

All this information is then used to create 2 brand new git
repositories; one to store user data (which is supposed to be shared
across all spaces), and another to store the OFBIZ space data.

The importer is smart enough to run multiple times, and only add what
has changed.

For items that don't have versions already, there are bulk commits that
occur. Over time, as the importer is continually run, history will build
up. When the switch occurs, the new system will store history for every
change.

Pages are no longer stored by ID. They are stored by name. Renames are
handled during import as well(requires updating all parent/child
relationships, all referenced labels, etc).

Now, on to the read-only side of all this.

* home page link(/) is updated automatically.
* attachment icon for pages with attachments is shown
* Links to added by, last edited by are shown per page.
* Raw wiki markup is imported, and I use mylyn to convert it to html.
This is not perfect.
* Page tree hierarchy; the selected node is auto-opened, but javascript
is not used to expand/collapse.
* attachments on pages(tools menu)
* Viewing the history for a page(parses git data using jgit).
* page tree list
* alphabetical page list(a-z bins), with pagination
* recently updated pages
* blog summary
* all labels
* popular labels
* all attachments, with pagination
* User profile(individual history, and metadata)
* User network

Things not implemented:

* viewing an actual change/diff
* comparing versions
* page summary
* permissions(can't fetch the data, don't have enough permissions on the
doogie user in confluence)
* blog detail, no comments either
* label detail(showing which pages have a label attached)
* attachment version display
* global user directory
* personal profile(no user login capability yet)
* no user profile actions(on the left side)
* default profilepic set not imported(license issues, so they are all 404)
* user status(no rpc, hardly anyone in ofbiz actually filled out status
updates).
* dashboard
* confluence macros in text blobs are not handled(mylyn doesn't support
them)

The timeframe it took me to write this: one week of initial importer
development, 3.5 weeks of continuing importer development, and frontend
development. All of this is completely from scratch, no previous
application code existed. I've been working completely in my spare
time(weekends too). A single person.

Now, here it comes. The url to the site. http://ofbizdemo.brainfood.com/.


Some interesting urls to hit:

1: http://ofbizdemo.brainfood.com/person/jacques.le.roux/network
2: http://ofbizdemo.brainfood.com/person/bjfreeman/network
3: 
http://ofbizdemo.brainfood.com/page/Apache%20OFBiz%20Service%20Providers/history

4: http://ofbizdemo.brainfood.com/pages/recent

Keep in mind that the software hasn't been heavily optimized.  Some 
things could definitely be cached more.  And, due to various 
circumstances, the loaded site data gets garbage collected at 
times (I've made certain that there are no static globals to keep 
things around), so sometimes it has to re-load the site configuration 
and compile a bunch of stuff on the fly.  That's solvable by giving 
more memory to the instance.




Things to note. There are *no* database calls *at all*. It's all done
with files on disk. History browsing is backed by git, using jgit to
read it directly in java. CSS styling is rather poor. Most unimplemented
pages should do something nice (instead of a big red 'Not Yet
Implemented'); at least there shouldn't be any exceptions on those pages.




Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Jacques Le Roux

Impressive, now I know what Webslinger is and what it is capable of!

Thanks

Jacques

From: Adam Heath doo...@brainfood.com

On 10/11/2010 12:41 PM, Adam Heath wrote:

So, about a month ago, people on this list wanted to see an example
website implemented in webslinger. At the time, I had a preliminary
version of specialpurpose/ofbizwebsite converted. I mentioned that I
would finish that up, and make it available.

However, that component in svn only has 2 real pages; not enough for a
demonstration. So I went looking for more content; I found it on
cwiki.apache.org.

The next question became how to get that data into something more
usable. Well, here is where the fun begins. :)

I have an importer that uses confluence's rpc, to fetch the following
items:

* pages, and all their versions
* labels(no versions)
* parent/child relationships(no versions)
* attachments, and all their versions
* comments(no versions stored)
* blogs, with versions
* comments(no versions stored)
* users
* followers/following
* profilepics

All this information is then used to create two brand-new git
repositories: one to store user data (which is supposed to be shared
across all spaces), and another to store the OFBIZ space data.

The importer is smart enough to run multiple times, and only add what
has changed.
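The "only add what has changed" behavior can be sketched as a digest comparison against the previous run. This is an illustrative sketch, not the actual importer's API; the class and method names below are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;

public class ChangeDetector {
    // Digest a page's content; used to decide whether a re-import
    // actually changed anything worth committing.
    public static String sha1(String content) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1")
                    .digest(content.getBytes(StandardCharsets.UTF_8));
            return new java.math.BigInteger(1, digest).toString(16);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is always available
        }
    }

    // previousDigests maps page path -> digest recorded by the prior run.
    // A page is "changed" if it is new or its content no longer matches,
    // so an idempotent importer commits only those entries.
    public static boolean changed(Map<String, String> previousDigests,
                                  String path, String content) {
        return !sha1(content).equals(previousDigests.get(path));
    }
}
```

Running the importer twice over unchanged data would then produce no new commits, while edited or newly created pages still get recorded.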

For items that don't have versions already, there are bulk commits that
occur. Over time, as the importer is continually run, history will build
up. When the switch occurs, the new system will store history for every
change.

Pages are no longer stored by ID. They are stored by name. Renames are
handled during import as well(requires updating all parent/child
relationships, all referenced labels, etc).

Now, on to the read-only side of all this.

* home page link(/) is updated automatically.
* attachment icon for pages with attachments is shown
* Links to the 'added by' and 'last edited by' users are shown per page.
* Raw wiki markup is imported, and I use mylyn to convert it to html.
This is not perfect.
* Page tree hierarchy; the selected node is auto-opened, but javascript
is not used to expand/collapse.
* attachments on pages(tools menu)
* Viewing the history for a page(parses git data using jgit).
* page tree list
* alphabetical page list(a-z bins), with pagination
* recently updated pages
* blog summary
* all labels
* popular labels
* all attachments, with pagination
* User profile(individual history, and metadata)
* User network

Things not implemented:

* viewing an actual change/diff
* comparing versions
* page summary
* permissions(can't fetch the data, don't have enough permissions on the
doogie user in confluence)
* blog detail, no comments either
* label detail(showing which pages have a label attached)
* attachment version display
* global user directory
* personal profile(no user login capability yet)
* no user profile actions(on the left side)
* default profilepic set not imported(license issues, so they are all 404)
* user status(no rpc, hardly anyone in ofbiz actually filled out status
updates).
* dashboard
* confluence macros in text blobs are not handled(mylyn doesn't support
them)

The timeframe it took me to write this: one week of initial importer
development, 3.5 weeks of continuing importer development, and frontend
development. All of this is completely from scratch, no previous
application code existed. I've been working completely in my spare
time(weekends too). A single person.

Now, here it comes. The url to the site. http://ofbizdemo.brainfood.com/.


Some interesting urls to hit:

1: http://ofbizdemo.brainfood.com/person/jacques.le.roux/network
2: http://ofbizdemo.brainfood.com/person/bjfreeman/network
3: 
http://ofbizdemo.brainfood.com/page/Apache%20OFBiz%20Service%20Providers/history

4: http://ofbizdemo.brainfood.com/pages/recent

Keep in mind that the software hasn't been heavily optimized. Some
things could definitely be cached more. And, due to various
circumstances, the loaded site data gets garbage collected at 
times(I've made certain that there are no static globals to keep 
things around), so sometimes it has to re-load the site configuration, 
and compile a bunch of stuff on the fly.  That's solvable by giving 
more memory to the instance.




Things to note. There are *no* database calls *at all*. It's all done
with files on disk. History browsing is backed by git, using jgit to
read it directly in java. CSS styling is rather poor. Most unimplemented
pages should do something nice (instead of a big red 'Not Yet
Implemented'); at least there shouldn't be any exceptions on those pages.






Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Adam Heath

On 10/11/2010 02:37 PM, Jacques Le Roux wrote:

Impressive, now I know what Webslinger is and what it is capable of!


Actually, this is just one application.  Webslinger(-core) is an 
enabling technology, that enables anything to be written quickly.  As 
I said, I've only spent probably 2 actual weeks on the application itself.


Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Scott Gray
On 12/10/2010, at 10:03 AM, Adam Heath wrote:

 On 10/11/2010 02:37 PM, Jacques Le Roux wrote:
 Impressive, now I know what Webslinger is and what it is capable of!
 
 Actually, this is just one application.  Webslinger(-core) is an enabling 
 technology, that enables anything to be written quickly.  As I said, I've 
 only spent probably 2 actual weeks on the application itself.

The main question in my mind is what does all this mean for OFBiz?  Obviously 
because webslinger is currently in the framework you envisage it playing some 
sort of role in the ERP applications, but what exactly?

I think I understand better now why Ean and yourself were somewhat negative 
towards the possibility of a jackrabbit integration, do you see this as some 
sort of alternative?

Regards
Scott



Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Adam Heath

On 10/11/2010 04:25 PM, Scott Gray wrote:

On 12/10/2010, at 10:03 AM, Adam Heath wrote:


On 10/11/2010 02:37 PM, Jacques Le Roux wrote:

Impressive, now I know what Webslinger is and what it is capable of!


Actually, this is just one application.  Webslinger(-core) is an enabling 
technology, that enables anything to be written quickly.  As I said, I've only 
spent probably 2 actual weeks on the application itself.


The main question in my mind is what does all this mean for OFBiz?  Obviously 
because webslinger is currently in the framework you envisage it playing some 
sort of role in the ERP applications, but what exactly?


It means that webslinger could run all of cwiki.apache.org, being 
fully java dynamic.  The front page is currently giving me 250req/s 
with single concurrency, and 750req/s with a concurrency of 5.  And, 
ofbiz would be running alongside, so that we could do other things as 
well.



I think I understand better now why Ean and yourself were somewhat

 negative towards the possibility of a jackrabbit integration, do
 you see this as some sort of alternative?

Storing content in the database is wrong.  How do you use normal 
editors(vim/emacs/dreamweaver/eclipse/photoshop) to manipulate files? 
 How do you run find/grep?  What revision control do you 
use(git/svn/whatever)?  The webslinger mantra is to reuse existing 
toolsets as much as possible.  That means using the filesystem, which 
then gives you nfs/samba access for sharing, etc.


The filesystem API we use is commons-vfs, but we don't actually use 
commons-vfs itself; most of the implementation and the filesystems have 
been rewritten to actually be non-blocking and performant, without 
thread leaks, memory leaks, or dead-locks.  We don't use bsf (too much 
reflection, too much synchronization).


Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Ean Schuessler
Scott Gray wrote:
 The main question in my mind is what does all this mean for OFBiz?
 Obviously because webslinger is currently in the framework you
 envisage it playing some sort of role in the ERP applications, but
 what exactly?
We see knowledge sharing as an important ERP function.
 I think I understand better now why Ean and yourself were somewhat negative 
 towards the possibility of a jackrabbit integration, do you see this as some 
 sort of alternative?
   
Some sort of alternative, though I would see Jackrabbit more as an
alternative to our modded CommonsVFS+Lucene. I'm mostly antagonistic to
a database oriented content management approach because I don't feel
like any of the tools out there (including Jackrabbit) realistically
deal with the situation of having a long-term development project
running in tandem with a live server. All of the database driven tools
(Wordpress, Drupal, Joomla, Plone, Alfresco, LifeRay) fail to deliver a
solution for distributed revision control. For me, that seems like a
critical weakness because I've been through more than a few overhauls of
a large corporate information management infrastructure. Work goes on in
parallel both in the live server and the development environment. If you
don't have tools to manage the process of merging those streams of
information then you are in for a tough time.

Jackrabbit is very interesting, mostly because it extends the filesystem
concept to blend more seamlessly with what the web seems to want its
filesystem to look like. I think it would be fully possible for us to
replace CommonsVFS with Jackrabbit but I'm not entirely clear that it is
worth it. Any CMS that cannot present itself as a vanilla filesystem is
fundamentally hampered by the unfortunate reality that most programs
expect to work with that model. I suppose it depends on where you want
to be inconvenienced.

-- 
Ean Schuessler, CTO
e...@brainfood.com
214-720-0700 x 315
Brainfood, Inc.
http://www.brainfood.com



Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Scott Gray
On 12/10/2010, at 11:45 AM, Adam Heath wrote:

 On 10/11/2010 04:25 PM, Scott Gray wrote:
 On 12/10/2010, at 10:03 AM, Adam Heath wrote:
 
 On 10/11/2010 02:37 PM, Jacques Le Roux wrote:
 Impressive, now I know what Webslinger is and what it is capable of!
 
 Actually, this is just one application.  Webslinger(-core) is an enabling 
 technology, that enables anything to be written quickly.  As I said, I've 
 only spent probably 2 actual weeks on the application itself.
 
 The main question in my mind is what does all this mean for OFBiz?  
 Obviously because webslinger is currently in the framework you envisage it 
 playing some sort of role in the ERP applications, but what exactly?
 
 It means that webslinger could run all of cwiki.apache.org, being fully java 
 dynamic.  The front page is currently giving me 250req/s with single 
 concurrency, and 750req/s with a concurrency of 5.  And, ofbiz would be 
 running alongside, so that we could do other things as well.

That wasn't what I was asking but since you mention it, what does that actually 
mean for us?  Part of the reason we moved to the ASF was so that we could rely on 
their infrastructure instead of maintaining our own.  Assuming we replaced 
confluence with webslinger then what do we do if you disappear from the scene 
in a year's time?  The idea of learning a new obscure tool doesn't sound very 
appealing.

 I think I understand better now why Ean and yourself were somewhat
  negative towards the possibility of a jackrabbit integration, do
  you see this as some sort of alternative?
 
 Storing content in the database is wrong.  How do you use normal 
 editors(vim/emacs/dreamweaver/eclipse/photoshop) to manipulate files?  How do 
 you run find/grep?  What revision control do you use(git/svn/whatever)?  The 
 webslinger mantra is to reuse existing toolsets as much as possible.  That 
 means using the filesystem, which then gives you nfs/samba access for 
 sharing, etc.
 
 The filesystem api we use is commons-vfs; we don't actually use commons-vfs 
 itself, most of the implementation and filesystems have been rewritten to 
 actually be non-blocking and performant and not have thread leaks or memory 
 leaks or dead-locks.  We don't use bsf(too much reflection, too much 
 synchronization).

Alternative, got it.



Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Adam Heath

On 10/11/2010 06:26 PM, Scott Gray wrote:

On 12/10/2010, at 11:45 AM, Adam Heath wrote:


On 10/11/2010 04:25 PM, Scott Gray wrote:

On 12/10/2010, at 10:03 AM, Adam Heath wrote:


On 10/11/2010 02:37 PM, Jacques Le Roux wrote:

Impressive, now I know what Webslinger is and what it is capable of!


Actually, this is just one application.  Webslinger(-core) is an enabling 
technology, that enables anything to be written quickly.  As I said, I've only 
spent probably 2 actual weeks on the application itself.


The main question in my mind is what does all this mean for OFBiz?  Obviously 
because webslinger is currently in the framework you envisage it playing some 
sort of role in the ERP applications, but what exactly?


It means that webslinger could run all of cwiki.apache.org, being fully java 
dynamic.  The front page is currently giving me 250req/s with single 
concurrency, and 750req/s with a concurrency of 5.  And, ofbiz would be running 
alongside, so that we could do other things as well.


That wasn't what I was asking but since you mention it, what does

 that actually mean for us?  Part of the reason we moved to the ASF was
 so that we could rely on their infrastructure instead of maintaining
 our own.  Assuming we replaced confluence with webslinger then what
 do we do if you disappear from the scene in a year's time?  The idea
 of learning a new obscure tool doesn't sound very appealing.

Who said that this was going to stay a brainfood-only project?  We 
have every intention of making webslinger(-core) a public, community 
project.  There isn't anything really like this.


* Nested servlet container(minor point).
* Filesystem overlay(think unionfs).
* Many servlet-like configuration points can be configured dynamically 
at runtime thru the filesystem.


Again, since all this stuff is in the filesystem, git/svn work on all 
aspects.  Merging between previous, development, workstation, and 
production is quite simple to do.


Because of the overlay capability, it's also easy to upgrade a base 
code module, with a light-weight file layout, and have the content 
site transparently sit on top, with a unified view of everything.
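The overlay idea described above boils down to a first-match lookup across ordered layers. The sketch below models each layer as an in-memory path-to-content map purely for illustration; the real commons-vfs based implementation merges actual directory trees and also has to handle directory listings, whiteouts, and so on.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class OverlayLookup {
    // Conceptual overlay (union) resolution: layers are ordered with the
    // top-most (most specific) layer first, and the first layer that
    // contains the requested path wins. Lower layers show through only
    // where the upper layers have nothing at that path.
    public static Optional<String> resolve(List<Map<String, String>> layers,
                                           String path) {
        for (Map<String, String> layer : layers) {
            String content = layer.get(path);
            if (content != null) {
                return Optional.of(content);
            }
        }
        return Optional.empty();
    }
}
```

With a client site layer on top of a base layer such as /job/brainfood-standard-base, a site file shadows the base file at the same path, while untouched base files remain visible in the unified view.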



I think I understand better now why Ean and yourself were somewhat
negative towards the possibility of a jackrabbit integration, do
you see this as some sort of alternative?


Storing content in the database is wrong.  How do you use normal 
editors(vim/emacs/dreamweaver/eclipse/photoshop) to manipulate files?  How do 
you run find/grep?  What revision control do you use(git/svn/whatever)?  The 
webslinger mantra is to reuse existing toolsets as much as possible.  That 
means using the filesystem, which then gives you nfs/samba access for sharing, 
etc.

The filesystem api we use is commons-vfs; we don't actually use commons-vfs 
itself, most of the implementation and filesystems have been rewritten to 
actually be non-blocking and performant and not have thread leaks or memory 
leaks or dead-locks.  We don't use bsf(too much reflection, too much 
synchronization).


Alternative, got it.




Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Scott Gray
On 12/10/2010, at 12:37 PM, Adam Heath wrote:

 On 10/11/2010 06:26 PM, Scott Gray wrote:
 On 12/10/2010, at 11:45 AM, Adam Heath wrote:
 
 On 10/11/2010 04:25 PM, Scott Gray wrote:
 On 12/10/2010, at 10:03 AM, Adam Heath wrote:
 
 On 10/11/2010 02:37 PM, Jacques Le Roux wrote:
 Impressive, now I know what Webslinger is and what it is capable of!
 
 Actually, this is just one application.  Webslinger(-core) is an enabling 
 technology, that enables anything to be written quickly.  As I said, I've 
 only spent probably 2 actual weeks on the application itself.
 
 The main question in my mind is what does all this mean for OFBiz?  
 Obviously because webslinger is currently in the framework you envisage it 
 playing some sort of role in the ERP applications, but what exactly?
 
 It means that webslinger could run all of cwiki.apache.org, being fully 
 java dynamic.  The front page is currently giving me 250req/s with single 
 concurrency, and 750req/s with a concurrency of 5.  And, ofbiz would be 
 running alongside, so that we could do other things as well.
 
 That wasn't what I was asking but since you mention it, what does
 that actually mean for us?  Part of the reason we moved to the ASF was
  so that we could rely on their infrastructure instead of maintaining
  our own.  Assuming we replaced confluence with webslinger then what
  do we do if you disappear from the scene in a year's time?  The idea
  of learning a new obscure tool doesn't sound very appealing.
 
 Who said that this was going to stay a brainfood-only project?  

No one and I didn't make that assumption.

 We have every intention of making webslinger(-core) a public, community 
 project.  There isn't anything really like this.

I'm sure every dead open source project had the intention of building a 
thriving community but it doesn't always work out that way.  What I am asking 
is what will the OFBiz documentation gain by being hosted on webslinger(-core?) 
that makes it worth the risk of the project being abandoned and us having to 
move it all back to confluence or whatever the ASF is using by then?

And what is (-core)?  Does that imply that there is a webslinger(-pro) edition 
that OFBiz users can take advantage of by contracting with or licensing from 
brainfood?  I don't think a little skepticism is out of order when you tell us 
how wonderful it would be for OFBiz to include webslinger if your company 
stands to benefit from its inclusion.  I'm not even saying that's a bad thing, 
I just prefer to have the full picture.

 * Nested servlet container(minor point).
 * Filesystem overlay(think unionfs).
 * Many servlet-like configuration points can be configured dynamically at 
 runtime thru the filesystem.
 
 Again, since all this stuff is in the filesystem, git/svn work on all 
 aspects.  Merging between previous, development, workstation, and production 
 is quite simple to do.
 
 Because of the overlay capability, it's also easy to upgrade a base code 
 module, with a light-weight file layout, and have the content site 
 transparently sit on top, with a unified view of everything.
 
 I think I understand better now why Ean and yourself were somewhat
 negative towards the possibility of a jackrabbit integration, do
 you see this as some sort of alternative?
 
 Storing content in the database is wrong.  How do you use normal 
 editors(vim/emacs/dreamweaver/eclipse/photoshop) to manipulate files?  How 
 do you run find/grep?  What revision control do you use(git/svn/whatever)?  
 The webslinger mantra is to reuse existing toolsets as much as possible.  
 That means using the filesystem, which then gives you nfs/samba access for 
 sharing, etc.
 
 The filesystem api we use is commons-vfs; we don't actually use commons-vfs 
 itself, most of the implementation and filesystems have been rewritten to 
 actually be non-blocking and performant and not have thread leaks or memory 
 leaks or dead-locks.  We don't use bsf(too much reflection, too much 
 synchronization).
 
 Alternative, got it.
 





Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Nico Toerl
On 10/12/10 01:41, Adam Heath wrote:

snip
 Now, here it comes.  The url to the site.
 http://ofbizdemo.brainfood.com/.

 Things to note.  There are *no* database calls *at all*.  It's all
 done with files on disk.  History browsing is backed by git, using
 jgit to read it directly in java.  CSS styling is rather poor.  Most
 unimplemented pages should do something nice(instead of a big red
 'Not Yet Implemented'); at least there shouldn't be any exceptions on
 those pages.

that sounded real interesting and i thought i have to have a look at
this, unfortunately all i got is:


  HTTP Status 500 -



*type* Exception report

*message*

*description* _The server encountered an internal error () that
prevented it from fulfilling this request._

*exception*

java.lang.NullPointerException

WEB_45$INF.Events.System.Request.DetectUserAgent_46$jn.run(DetectUserAgent.jn:166)
org.webslinger.ext.code.CodeType.run(CodeType.java:74)

org.webslinger.WebslingerPlanner.invokeContent(WebslingerPlanner.java:531)
org.webslinger.WebslingerPlanner.runDirect(WebslingerPlanner.java:286)

org.webslinger.WebslingerServletContext.runDirectNoThrow(WebslingerServletContext.java:1406)

org.webslinger.WebslingerServletContext$RequestEventsRequestListener.runEvent(WebslingerServletContext.java:1068)

org.webslinger.WebslingerServletContext$RequestEventsRequestListener.requestInitialized(WebslingerServletContext.java:1076)

org.webslinger.WebslingerServletContext$RequestFilterChain.doFilter(WebslingerServletContext.java:861)

org.webslinger.WebslingerServletContext.service(WebslingerServletContext.java:429)

org.webslinger.WebslingerServletContext.service(WebslingerServletContext.java:292)

org.webslinger.servlet.WebslingerServlet.service(WebslingerServlet.java:52)
javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

*note* _The full stack trace of the root cause is available in the
Apache Tomcat/6.0.29 logs._


cheers
Nico

-- 
Nico Toerl  
SeniorSysadmin / IT-department
Virtual Village

Tel. +86 21 51718885 ext.7042




Re: Woop! Confluence data imported into git and displayed with webslinger!

2010-10-11 Thread Adam Heath

On 10/11/2010 10:07 PM, Nico Toerl wrote:

On 10/12/10 01:41, Adam Heath wrote:

snip

Now, here it comes.  The url to the site.
http://ofbizdemo.brainfood.com/.

Things to note.  There are *no* database calls *at all*.  It's all
done with files on disk.  History browsing is backed by git, using
jgit to read it directly in java.  CSS styling is rather poor.  Most
unimplemented pages should do something nice(instead of a big red
'Not Yet Implemented'); at least there shouldn't be any exceptions on
those pages.


that sounded real interesting and i thought i have to have a look at
this, unfortunately all i got is:


   HTTP Status 500 -



*type* Exception report

*message*

*description* _The server encountered an internal error () that
prevented it from fulfilling this request._

*exception*

java.lang.NullPointerException

WEB_45$INF.Events.System.Request.DetectUserAgent_46$jn.run(DetectUserAgent.jn:166)


Hmm, nice, thanks.

Your user-agent is:

Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; rv:1.9.2.9)
 Gecko/20100824 Firefox/3.6.9

The (x86_64) is what was causing the problem; I hadn't seen this type 
of string in the wild.  The regex doesn't like nested ().  It's fixed now.
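One way to avoid that regex failure mode entirely is a nesting-aware scanner that tracks parenthesis depth while extracting the user-agent comment. This is a hypothetical sketch, not the actual DetectUserAgent code.

```java
import java.util.ArrayList;
import java.util.List;

public class UaComments {
    // Extract top-level parenthesized comments from a User-Agent string.
    // Depth tracking means an inner group such as "(x86_64)" becomes part
    // of the comment's content instead of terminating the outer comment
    // early, which is the failure a non-nesting regex runs into.
    public static List<String> extract(String ua) {
        List<String> comments = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        int depth = 0;
        for (int i = 0; i < ua.length(); i++) {
            char c = ua.charAt(i);
            if (c == '(') {
                // The outermost '(' opens a comment; deeper ones are content.
                if (depth++ > 0) current.append(c);
            } else if (c == ')' && depth > 0) {
                if (--depth == 0) {
                    comments.add(current.toString());
                    current.setLength(0);
                } else {
                    current.append(c);
                }
            } else if (depth > 0) {
                current.append(c);
            }
        }
        return comments;
    }
}
```

On the Firefox string quoted above, extract() yields the single comment "X11; U; Linux i686 (x86_64); en-GB; rv:1.9.2.9" with the inner parentheses intact.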