> Which ' local file system'  references are you talking about?

Any time the code refers to a file on disk for configuration or operational 
information instead of the database. My goal was to make the entire OTRS system 
code (with the exception of the data it operates on) read-only. I operate 
environments where truly read-only code can be efficiently physically shared, 
and I wanted to take advantage of that. 
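To make "physically shared" concrete: with a fully read-only code tree, a single 
export can be mounted on every node. The server name and paths below are purely 
illustrative:

```
# /etc/fstab entry (illustrative): one shared, read-only OTRS code tree for all nodes
nfs-server:/export/otrs  /opt/otrs  nfs  ro,nosuid,nodev  0  0
```

Only the variable data (the database, plus anything else configured with a 
database backend) then differs between nodes.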

> There are just a few places where OTRS uses local file systems (session
> storage, attachments, VirtualFS which is used by Change
> Management) and all these also have options to use a database backend.
> Apart from that, there is the loader cache for which there is the '--generate'
> option now.
> There is also the SysConfig which is stored in a file on disk.

Yes. Recent versions of the OTRS code are much, much better about not storing 
node-specific code elements in external disk files (thank you!), which is why I 
said my code would need updating. You weren't as careful about that in earlier 
releases.
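For anyone following along, those database backends are selected in 
Kernel/Config.pm. The key names below are the ones I've seen in the releases 
I've worked with; verify them against your version before relying on them:

```perl
# Kernel/Config.pm (sketch -- check key names against your OTRS release)
$Self->{SessionModule}           = 'Kernel::System::AuthSession::DB';
$Self->{'Ticket::StorageModule'} = 'Kernel::System::Ticket::ArticleStorageDB';
```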
 
>  Did you store the minified JS and CSS files in the database? And the
> SysConfig values? that's nice but it might be a little bit over-engineered. 

OK. If it's not useful, then no harm done. I think at least the parts that move 
SysConfig data into the database would be useful (one of the most common 
problems I see here is caused by not regenerating that cache file), but no big 
deal. 
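For reference, the cache file in question can be regenerated from the command 
line; the script name has changed across releases, so treat this as a sketch and 
check your bin/ directory:

```shell
# Regenerate the SysConfig cache so the running system picks up configuration changes
# (script name from OTRS 3.x-era releases; newer ones use
#  bin/otrs.Console.pl Maint::Config::Rebuild instead)
bin/otrs.RebuildConfig.pl
```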

> Also,
> the vast majority of OTRS users will not need clustered setups, and will use
> vertical scaling  - bigger machines - or maybe break out the database server
> to a separate machine, and avoid all headaches that come with multiple
> master nodes.

Here I would disagree. As people virtualize more and more of their 
infrastructure, there is considerable evidence that a small number of bigger 
machines performs much more poorly than multiple smaller machines, and is much 
harder to tune for performance. A hypervisor has to work a lot harder to manage 
a few big VMs than many smaller ones (especially on Intel architectures, which 
are not optimized for this kind of thing): consider page-table mapping, swap 
infrastructure for large VMs, and balancing in-memory versus file caching when 
substantial numbers of pages must be brought in and out before a machine can be 
dispatched efficiently. In a shared-resource environment, a horizontally 
scalable design really does work out better in terms of system overhead than a 
vertically scaled one. You also gain a measure of HA design that allows 
concurrent maintenance if done properly (another common problem seen here is 
not having a way to do rolling upgrades to avoid outages, which really needs to 
be present for enterprise-grade services). 

Definitely break out the database server in any case, but I'd argue much more 
strongly for well-behaved cluster performance than just throwing hardware at 
the problem. Horizontal and vertical scaling aren't mutually exclusive if you 
design for them properly. 

_______________________________________________
OTRS mailing list: dev - Webpage: http://otrs.org/
Archive: http://lists.otrs.org/pipermail/dev
To unsubscribe: http://lists.otrs.org/cgi-bin/listinfo/dev