Re: [gentoo-user] Re: {OT} Allow work from home?

2016-03-04 Thread lee
Kai Krakow writes:

> On Sat, 20 Feb 2016 10:48:57 +0100,
> lee wrote:
>
>> Kai Krakow writes:
>> 
>> > On Fri, 22 Jan 2016 00:52:30 +0100,
>> > lee wrote:
>> >
>> >> Is WSUS of any use without domains?  If it is, I should take a
>> >> look at it.
>> >
>> > You can use it with and without domains. What domains give you
>> > through GPO is just automatic deployment of the needed registry
>> > settings in the client.
>> >
>> > You can simply create a proper .reg file and deploy it to the
>> > clients however you like. They will connect to WSUS and receive
>> > updates you control.
>> >
>> > No magic here.
>> 
>> Sounds good :)  Does it also solve the problem of having to make
>> settings for all users, like when setting up a MUA or Libreoffice?
>> 
>> That means settings on the same machine for all users, like setting up
>> seamonkey so that when composing an email, it's in plain text rather
>> than html, a particular email account every user should have, and a
>> number of other settings that need to be the same for all users.  For
>> LibreOffice, it would be the deployment of a macro for all users and
>> making some settings.
>
> Well... Depends on the software. Some MUAs may store their settings to
> the registry, others to files. You'll have to figure it out - it should
> work. Microsoft uses something like that to auto-deploy Outlook
> profiles to Windows domain users if an Exchange server is installed.
> Thunderbird uses a combination of registry and files. You could deploy
> a preconfigured Thunderbird profile to the user's profile dir, then
> configure the proper profile path in the registry. Firefox works the
> same: Profile directory, reference to it in the registry.
>
> I think LibreOffice would work similarly to MS Office: Just deploy proper
> files after figuring out its path. I once deployed OpenOffice macros
> that way to Linux X11 terminal users.

It's possible --- and tedious --- to copy a seamonkey profile to other
users.  Then you find you have a number of users who require a more or
less different setup, or you add more users later with a more or less
different profile, or you need to add something to the profile for all
users, and you're back to square one.

I'd find it very useful to be able to manage settings for multiple users
with some sort of configuration software which allows me to make
settings for them from an administrative account: change a setting,
select the users it should apply to, apply it and be done with it.

The way it is now, I need to log in as every user that needs some change
of settings and do that for each of them over and over again.  This
already sucks with a handful of users.  What do you do when you have
hundreds of users?
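
For the Mozilla-family apps (seamonkey, Thunderbird, Firefox) this can at
least be scripted from an administrative account, because they apply any
user.js found in the profile directory at the next start.  A minimal
sketch for the Unix side; the user names, profile paths and the pref name
are illustrative only, so verify the real keys in about:config first:

  #!/bin/sh
  # Sketch: append shared prefs to every user's seamonkey profile.
  # Assumes one profile per user under ~/.mozilla/seamonkey/*.default.
  for u in alice bob carol; do
      for prof in /home/"$u"/.mozilla/seamonkey/*.default; do
          [ -d "$prof" ] || continue
          # force plain-text composing for the first mail identity
          echo 'user_pref("mail.identity.id1.compose_html", false);' \
              >> "$prof/user.js"
          chown "$u": "$prof/user.js"    # file was written as root
      done
  done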



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-03-04 Thread lee
Kai Krakow writes:

> On Sat, 20 Feb 2016 11:24:56 +0100,
> lee wrote:
>
>> > It uses some very clever ideas to place files into groups and into
>> > proper order - rather than using file mod and access times like other
>> > defrag tools do (which even make the problem worse by doing so
>> > because this destroys locality of data even more).  
>> 
>> I've never heard of MyDefrag, I might try it out.  Does it make
>> updating any faster?
>
> Ah well, difficult question... Short answer: It takes countermeasures
> against performance decreasing too fast after updates. It does this by
> using a "gapped" on-disk file layout - leaving some gaps for Windows to
> put temporary files. That way, files don't become as widely spread as
> they usually would during updates. But yes, it improves installation
> time.

What difference would that make with an SSD?

> Apparently it's been unmaintained for a few years but it still does a
> good job. It was built upon a theory by a student about how to properly
> reorganize file layout on a spinning disk to keep performance as high
> as possible.

For spinning disks, I can see how it can be beneficial.

>> > But even SSDs can use _proper_ defragmentation from time to time for
>> > increased lifetime and performance (this is due to how the FTL works
>> > and because erase blocks are huge, I won't get into detail unless
>> > someone asks). This is why mydefrag also supports flash
>> > optimization. It works by moving as few files as possible while
>> > coalescing free space into big chunks, which in turn relaxes
>> > pressure on the FTL and allows for more free and contiguous
>> > erase blocks, which reduces early flash chip wear. A filled SSD with
>> > long usage history can certainly gain back some performance from
>> > this.  
>> 
>> How does it improve performance?  It seems to me that, for practical
>> use, almost all of the better performance with SSDs is due to reduced
>> latency.  And IIUC, it doesn't matter for the latency where data is
>> stored on an SSD.  If its performance degrades over time when data is
>> written to it, the SSD sucks, and the manufacturer should have done a
>> better job.  Why else would I buy an SSD?  If it needs to reorganise
>> the data stored on it, the firmware should do that.
>
> There are different factors which have impact on performance, not just
> seek times (which, as you write, is the worst performance breaker):
>
>   * management overhead: the OS has to do more housekeeping, which
> (a) introduces more IOPS (which is the only relevant limiting
> factor for SSD) and (b) introduces more CPU cycles and data
> structure locking within the OS routines during performing IO which
> comes down to more CPU cycles spent during IO

How would that be reduced by defragmenting an SSD?

>   * erasing a block is where SSDs really suck performance-wise, plus
> blocks are essentially read-only once written - that's how flash
> works, a flash data block needs to be erased prior to being
> rewritten - and that is (compared to the rest of its performance) a
> really REALLY HUGE time factor

So let the SSD do it when it's idle.  For applications in which it isn't
idle enough, an SSD won't be the best solution.

>   * erase blocks are huge compared to common filesystem block sizes
> (erase block = 1 or 2 MB vs. file system block being 4-64k usually)
> which happens to result in this effect:
>
> - OS replaces a file by writing a new, deleting the old
>   (common during updates), or the user deletes files
> - OS marks some blocks as free in its FS structures; it depends on
>   the file size and its fragmentation whether this gives you a
>   continuous area of free blocks or many small blocks scattered
>   across the disk: it results in free space fragmentation
> - free space fragments happen to become small over time, much
>   smaller than the erase block size
> - if your system has TRIM/discard support it will tell the SSD
>   firmware: here, I no longer use those 4k blocks
> - as you already figured out: those small blocks marked as free do
>   not properly align with the erase block size - so actually, you
>   may end up with a lot of free space but essentially no complete
>   erase block is marked as free

Use smaller erase blocks.

> - this situation means: the SSD firmware cannot reclaim this free
>   space to do "free block erasure" in advance so if you write
>   another block of small data you may end up with the SSD going
>   into a direct "read/modify/erase/write" cycle instead of just
>   "read/modify/write" and deferring the erasing until later - ah
>   yes, that probably becomes slow then
> - what do we learn: (a) defragment free space from time to time,
>   (b) enable TRIM/discard to reclaim blocks in advance, (c) you may
>   want to over-provision your SSD: just don't ever use 
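
On the Linux side, point (b) of that list is easy to act on with periodic
TRIM.  A sketch, assuming util-linux's fstrim and a filesystem on a
device that supports discard:

  # one-shot TRIM of the root filesystem; -v prints how much was trimmed
  fstrim -v /

  # or let systemd do it weekly (fstrim.timer ships with util-linux)
  systemctl enable --now fstrim.timer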

Re: [gentoo-user] Re: {OT} Allow work from home?

2016-02-21 Thread lee
Kai Krakow writes:

> On Wed, 20 Jan 2016 01:46:29 +0100,
> lee wrote:
>
>> The time before, it wasn't
>> a VM but a very slow machine, and that also took a week.  You can have
>> the fastest machine in the world and Windoze always manages to bring
>> it down to a slowness we wouldn't have accepted even 20 years ago.
>
> This is mainly an artifact of Windows updates destroying locality of
> data pretty fast, and mostly a problem when running on spinning rust.
> DLLs and data files needed for booting or starting specific
> software become spread widely across the hard disk. Fragmentation isn't
> the issue here - NTFS is pretty good at keeping it low. Still, the
> right defragmentation tool will help you:

You can't very well defragment the disk while updates are being
performed.  Updating goes like this:


+ install from an installation medium

+ tell the machine to update

+ come back the next day and find out that it's still looking for updates
  or trying to download them or wants to be restarted

+ restart the machine

+ start over with the second step until all updates have been installed


That usually takes a week.  When it's finally done, disable all
automatic updates because if you don't, the machine usually becomes
unusable when it installs another update.

It doesn't matter if you have the fastest machine in the world or some
old hardware you wouldn't actually use anymore, it always takes about a
week.
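
At least the "tell the machine to update" step can be scripted on clients
of that era; a sketch, run from an elevated cmd prompt (these wuauclt
switches exist on XP through 8.1, but not on Windows 10):

  rem ask the Windows Update agent to look for updates right now
  wuauclt /detectnow

  rem and report status back to the update server
  wuauclt /reportnow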

> I always recommend staying away from the 1000 types of "tuning tools",
> they actually make it worse and take away your chance of properly
> optimizing the on-disk file layout.

I'm not worried about that.  One of the VMs is still on an SSD, so I
turned off defragging.  The other VMs that use files on a hard disk
defrag themselves regularly overnight.

> And I always recommend using MyDefrag and using its system disk
> defrag profile to reorder the files on your hard disk. It takes ages
> the first time it runs but it brings your system back to almost
> out-of-the-box boot and software startup time performance.

That hasn't been an issue with any of the VMs yet.

> It uses some very clever ideas to place files into groups and into
> proper order - rather than using file mod and access times like other
> defrag tools do (which even make the problem worse by doing so because
> this destroys locality of data even more).

I've never heard of MyDefrag, I might try it out.  Does it make updating
any faster?

> But even SSDs can use _proper_ defragmentation from time to time for
> increased lifetime and performance (this is due to how the FTL works
> and because erase blocks are huge, I won't get into detail unless
> someone asks). This is why mydefrag also supports flash optimization.
> It works by moving as few files as possible while coalescing free space
> into big chunks, which in turn relaxes pressure on the FTL and allows
> for more free and contiguous erase blocks, which reduces early flash
> chip wear. A filled SSD with long usage history can certainly gain back
> some performance from this.

How does it improve performance?  It seems to me that, for practical
use, almost all of the better performance with SSDs is due to reduced
latency.  And IIUC, it doesn't matter for the latency where data is
stored on an SSD.  If its performance degrades over time when data is
written to it, the SSD sucks, and the manufacturer should have done a
better job.  Why else would I buy an SSD?  If it needs to reorganise the
data stored on it, the firmware should do that.



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-02-21 Thread lee
Kai Krakow writes:

> On Fri, 22 Jan 2016 00:52:30 +0100,
> lee wrote:
>
>> Is WSUS of any use without domains?  If it is, I should take a look at
>> it.
>
> You can use it with and without domains. What domains give you through
> GPO is just automatic deployment of the needed registry settings in the
> client.
>
> You can simply create a proper .reg file and deploy it to the clients
> however you like. They will connect to WSUS and receive updates you
> control.
>
> No magic here.

Sounds good :)  Does it also solve the problem of having to make
settings for all users, like when setting up a MUA or Libreoffice?

That means settings on the same machine for all users, like setting up
seamonkey so that when composing an email, it's in plain text rather
than html, a particular email account every user should have, and a
number of other settings that need to be the same for all users.  For
LibreOffice, it would be the deployment of a macro for all users and
making some settings.
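
For reference, a minimal sketch of such a .reg file; the server URL is a
placeholder, and only the most commonly needed values are shown:

  Windows Registry Editor Version 5.00

  [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
  "WUServer"="http://wsus.example.local:8530"
  "WUStatusServer"="http://wsus.example.local:8530"

  [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
  "UseWUServer"=dword:00000001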



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-02-21 Thread lee
Kai Krakow writes:

> On Wed, 20 Jan 2016 01:46:29 +0100,
> lee wrote:
>
>> >> Overcommitting disk space sounds like a very bad idea.
>> >> Overcommitting memory is not possible with xen.  
>> >
>> > Overcommitting diskspace isn't such a bad idea, considering most
>> > installs never utilize all the available diskspace.  
>> 
>> When they do not use it anyway, there is no reason to give it to them
>> in the first place.  And when they do use it, how do the VMs handle
>> the problem that they have plenty of disk space available, from their
>> point of view, while the host which they don't know about doesn't
>> allow them to use it?
>> 
>> Besides, overcommitting disk space means to intentionally create a
>> setup which involves that the host can run out of disk space easily.
>> That is not something I would want to create for a host which is
>> required to function reliably.
>> 
>> And how much do you need to worry about the security of the VMs when
>> you build in a way for the users to bring the whole machine, or at
>> least random VMs, down by using the disk space which has been
>> assigned to them?  The users are somewhat likely to do that even
>> unintentionally, and the more you overcommit, the more likely that
>> becomes.
>
> Overcommitting storage is for setups where it's easy to add storage
> pools when needed, like virtual SAN. You just monitor available space
> and when it falls below a threshold, add more to the storage pool,
> whose filesystem will grow.
>
> You just overcommit to whatever storage requirements you may ever need
> combined over all VMs but you initially only buy what you need to start
> with including short term expected growth.
>
> Then start with clones/snapshots from the same VM image (SANs provide
> that so you actually do not have to care about snapshot dependencies
> within your virtualization software).
>
> SANs usually also provide deduplication and compression, so at any
> point you can coalesce the images back into smaller storage
> requirements.
>
> A sane virtualization solution also provides RAM deduplication and
> compaction so that you can overcommit RAM the same way as storage. Of
> course it will at some point borrow RAM from swap space. Usually you
> will then just migrate one VM to some other hardware - even while it is
> running. If connected to a SAN this means: You don't have to move the
> VM images itself. The migration is almost instant: The old VM host acts
> as some sort of virtualized swap file holding the complete RAM, the new
> host just "swaps in" needed RAM blocks over the network and migrates the
> rest during idle time in the background. This can even be automated by
> monitoring the resources and let the VM manager decide and act.
>
> The Linux kernel has lately gained support for all this so you could
> probably even home-brew it.

Ok, that makes sense when you have more or less unlimited resources to
pay for all the hardware you need for this.  I wonder how much money
you'd have to put out to even get started with a setup like this ...
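
The storage half, at least, can be tried for free: thin provisioning in
its simplest home-brew form is just a sparse image that grows on demand.
A sketch with qemu-img; the path and sizes are made up:

  # create a 500G qcow2 image that only consumes host space as the
  # guest writes to it - the file starts out at a few hundred KiB
  qemu-img create -f qcow2 /var/lib/libvirt/images/guest1.qcow2 500G

  # see how much it actually occupies on the host
  qemu-img info /var/lib/libvirt/images/guest1.qcow2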



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Grant
>> The answer to this may be an obvious "yes" but I've never done it so I'm
>> not sure.  Can I route requests from machine C through machine A only
>> for my domain name, and not involve A for C's other internet requests?
>> If so, where is that configured?
>
> While ZT can be used to route requests between networks, it is mainly
> used to talk directly between clients. If A wants to talk to C over ZT,
> it uses C's ZT IP address.
>
> Here's a snippet from ifconfig on this machine, which may help it make
> sense to you
>
> wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
> inet 192.168.1.6  netmask 255.255.255.0  broadcast 192.168.1.255
> ether c4:8e:8f:f7:55:c9  txqueuelen 1000  (Ethernet)
>
> zt0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 2800
> inet 10.252.252.6  netmask 255.255.255.0  broadcast 10.252.252.255
>
> To talk to this computer from another of my machines over ZT I would use
> the 10.252... address. If you tried that address, you'd get nowhere as
> you are not connected to my network.


So if 10.252.252.6 were configured as a router, could I join your ZT
network and use iptables to route my example.com 80/443 requests to
10.252.252.6, thereby granting me access to my web apps which are
configured to only allow your machine's WAN IP?

The first couple of paragraphs here make it sound like a centralized SaaS
as far as the setup phase of the connection:

https://www.zerotier.com/blog/?p=577

Is it possible (easy?) to run your own "core node" and so not interact
with the official core nodes at all?

- Grant



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Neil Bothwick
On Fri, 22 Jan 2016 11:51:45 -0800, Grant wrote:

> > To talk to this computer from another of my machines over ZT I would
> > use the 10.252... address. If you tried that address, you'd get
> > nowhere as you are not connected to my network.  

> So if 10.252.252.6 were configured as a router, could I join your ZT
> network and use iptables to route my example.com 80/443 requests to
> 10.252.252.6, thereby granting me access to my web apps which are
> configured to only allow your machine's WAN IP?

You don't need a bridge in a network to join it. If I want you to join
it, I give you the network ID and you simply join it, although you can't
actually connect to it until I authorise the connection.

However, if this machine were configured as a bridge, then once you had
joined my network you would have access to all of my LAN, rather like an
OpenVPN connection. It seems that the main difference between this and a
traditional VPN is that all of the setup work is done on the one
computer; connecting extra clients is just a matter of connecting them to
the network.

Note that I haven't actually tried this, every machine on my LAN that I
want to be able to connect to is running ZT, so it is directly accessible.

> Is it possible (easy?) to run your own "core node" and so not interact
> with the official core nodes at all?

It is definitely possible, and you skip the "only ten clients for
free" limit as that only applies to using their servers. Once again, it
isn't something I've tried yet, but it is on my list of "things to do
when I find some time". I'm quite happy using their discovery servers so
this would be only an exercise in trying it "because I can".


-- 
Neil Bothwick

MUPHRY'S LAW: The principle that any criticism of the writing of others
will itself contain at least one grammatical error.




Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Neil Bothwick
On Thu, 21 Jan 2016 17:18:27 -0800, Grant wrote:

> > There is ZeroTier as a replacement for OpenVPN, and Syncthing for
> > syncing. Both are P2P solutions and you can run your own discovery
> > servers if you don't want any traffic going through a 3rd party
> > (although they don't send data through the servers).
> >
> > I've no idea whether that would meet your security criteria but it
> > certainly fulfils the "easier than OpenVPN" one. It will take only a
> > few minutes to install and set up using the public servers, although,
> > as I said, your network is never public, so you can check whether
> > they do what you want. Then you can look at hosting your own server
> > for security.
> >
> > https://www.zerotier.com/
> > https://syncthing.net/  

> Zerotier looks especially interesting.  Can I have machine A listen for
> Zerotier connections, have machine B connect to machine A via Zerotier,
> have machine C connect to machine A via Zerotier, and rsync push from B
> to C?

You set up a network and the machines all connect to that network, so A,
B and C can all talk to each other.

> Does connecting two machines via Zerotier involve any security
> considerations besides those involved when connecting those machines to
> the internet?  In other words, is it a simple network connection or are
> other privileges involved with that connection?

Connections are encrypted, handled by the ZeroTier protocols, but
otherwise it behaves like a normal network connection. 

> Can I somehow require the Zerotier connection between machines A and C
> in order for C to pass HTTP basic authentication on my web server which
> resides elsewhere?  Maybe I can route all traffic from machine C to my
> web server through C's Zerotier connection to A and lock down basic
> authentication on my web server to machine A?

Your ZeroTier connections are on a separate network, you pick an address
block when you set up the network but that network is only accessible to
other machines connected to your ZeroTier network. You can have ZT
allocate addresses within that block (it's not dynamic addressing, because
once a client is given an address, it always gets the same address), or you
can specify the address for each client. So you can include an address
requirement in your .htaccess to ensure connections are only allowed from
your ZT network.
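
A sketch of that address requirement, assuming Apache 2.4 and the
10.252.252.0/24 block used as an example earlier in the thread:

  # allow only clients coming in over the ZeroTier network, and
  # still require basic authentication on top of that
  AuthType Basic
  AuthName "web apps"
  AuthUserFile /etc/apache2/htpasswd
  <RequireAll>
      Require ip 10.252.252.0/24
      Require valid-user
  </RequireAll>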


-- 
Neil Bothwick

furbling, v.:
Having to wander through a maze of ropes at an airport or bank
even when you are the only person in line.
-- Rich Hall, "Sniglets"




Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Neil Bothwick
On Fri, 22 Jan 2016 07:52:12 -0500, Rich Freeman wrote:

> My understanding is that ZT does not support routing of any kind.
> Traffic destined to a ZT peer goes directly to that peer, and that's
> it.  You can't route over ZT and onto a subnet on a remote peer's
> network, or from one peer to another, or anything like that.

You can set up one machine on a LAN as a bridge, that then connects your
ZT clients to the LAN, much like a traditional VPN.


-- 
Neil Bothwick

I used to have a handle on life, then it broke.




Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Rich Freeman
On Fri, Jan 22, 2016 at 7:29 AM, Grant wrote:
>
> The answer to this may be an obvious "yes" but I've never done it so I'm not
> sure.  Can I route requests from machine C through machine A only for my
> domain name, and not involve A for C's other internet requests?  If so,
> where is that configured?
>
> BTW, how did you find ZT?  Pity there's no ebuild yet.
>

My understanding is that ZT does not support routing of any kind.
Traffic destined to a ZT peer goes directly to that peer, and that's
it.  You can't route over ZT and onto a subnet on a remote peer's
network, or from one peer to another, or anything like that.

ZT isn't even capable of routing internet traffic right now, so
none of it will go over ZT.

For other VPNs it is all IP and routing works however you define it on
either side.  You can make a VPN your default route, or not, etc.  You
can do whatever iproute2/iptables/etc allows on linux hosts.  I
imagine Windows is a bit less flexible but I'm sure you can define
which interface is the default route.
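
On a Linux host, the "only my domain goes through the tunnel" case from
Grant's question comes down to one static route.  A sketch with a made-up
web server address and a generic tun0 VPN interface:

  # route just the web server's address through the tunnel;
  # everything else keeps using the normal default route
  ip route add 203.0.113.10/32 dev tun0

  # verify
  ip route show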

-- 
Rich



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Neil Bothwick
On Fri, 22 Jan 2016 04:29:00 -0800, Grant wrote:

> The answer to this may be an obvious "yes" but I've never done it so I'm
> not sure.  Can I route requests from machine C through machine A only
> for my domain name, and not involve A for C's other internet requests?
> If so, where is that configured?

While ZT can be used to route requests between networks, it is mainly
used to talk directly between clients. If A wants to talk to C over ZT,
it uses C's ZT IP address.

Here's a snippet from ifconfig on this machine, which may help it make
sense to you

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.1.6  netmask 255.255.255.0  broadcast 192.168.1.255
ether c4:8e:8f:f7:55:c9  txqueuelen 1000  (Ethernet)

zt0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 2800
inet 10.252.252.6  netmask 255.255.255.0  broadcast 10.252.252.255

To talk to this computer from another of my machines over ZT I would use
the 10.252... address. If you tried that address, you'd get nowhere as
you are not connected to my network.

Set up a network and play with it. It costs nothing to set up a network
with up to 10 clients. The main benefit is that it is so easy to
administer and add new clients. If you use it between two machines in the
same LAN, the traffic doesn't go outside of the LAN, so it works at more
or less the same speed.
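
Joining from the command line is equally short; a sketch, with a made-up
16-digit network ID:

  # join a network (the ID comes from whoever runs the network)
  zerotier-cli join abcdef1234567890

  # once the controller has authorised you, check the assigned address
  zerotier-cli listnetworks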

> BTW, how did you find ZT?  Pity there's no ebuild yet.

Someone mentioned it during a talk at Liverpool LUG. It wasn't the topic
of the talk; he just used it to grab something from his home network to
answer a question. An ebuild would be nice, but the installer script
works perfectly here, both for systemd and openrc systems.


-- 
Neil Bothwick

In the 60's people took acid to make the world weird.
Now the world is weird and people take Prozac to make it normal.




Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-20 Thread Grant
> As an independent consultant, I've found that most companies over the
> years frown on remote work. So I've mostly gotten stuck driving a lot,
> or working on things nobody else (sane) would touch. One does develop
> thick skin; but most of this work was engineering hardware or embedded
> systems. It's even worse if you are an employee. In the past I just
> dedicated a windoze machine
> and linux machine where needed on fresh installs for their peace of mind.
> Granted, I only had a few customers at any given time, so traditional
> backups completed the remote work environment. I'd like to move into 2016
> and the cloud using the latest of what is available for remote workers.
>
>
> So for 18 months now, I have been poking around extensively in the
> cluster/cloud space. Remote work is mostly mandatory; it fits in with their
> business model and devops needs. Since January 2016, I've had an explosion
> of remote opportunities, to the point that something fundamental here in
> the US has changed with remote work. So kudos to Grant for starting this
> thread, and I deeply appreciate what everyone has contributed. I am hoping
> that the 'corporate folks' have a solution for remote workers (employees
> or contractors) so I do not have to be responsible for the security design
> of the remote component. I have my doubts. There is also a dramatic
> up-tick in using gentoo in cluster/cloud solutions from my perspective.
> When I suggest folks benchmark their codes on the platforms they are
> running on and then gentoo underneath, most cede that ground without
> testing. The few that do test, once they get past the bitching about
> installing gentoo, are quite amazed at the performance gains using gentoo
> under their cluster/cloud.
>
>
> What I hope is that a companion-reference iptables/nftables configuration
> and the options from this thread make it to the gentoo wiki. I have
> static IPs at home and fiber, so a solution for that scenario is keenly
> appreciated, just in case the companies I work for do not have something
> robust that allows a gentoo workstation to be a remote work companion to
> whatever they use (windoze, chrome, apple, etc) for a secure solution via
> remote work connections.


This is really interesting stuff, thank you James.

- Grant



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-16 Thread Daniel Frey
On 01/16/2016 07:48 AM, Grant Edwards wrote:
>>
>> I've set up my home server to act as a Windows-type terminal server
>> using X and tigervnc.
> 
> OK, there you're running the X server and client on the same machine,
> but the server is using VNC to display remotely.  That works.  Just
> don't try to do it the "right" way -- the way X was intended to work.
> 

Yes, I was aware the "right" way wouldn't work for what I was trying to
do. To be honest, I never tested this over a VPN, I usually use it
internally when I'm moving big files around on the server. I used the
shell for the longest time but when you are copying files that don't
fit easily into a wildcard pattern, it's just easier to click them in the
GUI and copy/move them. That was the whole reason I set it up. The nice
thing is that everything runs on the server on my local LAN this way;
the only thing needed is tigervnc (well, and a VPN setup) on the client.
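
The bare-bones version of such a setup, as a sketch; it assumes tigervnc
on both ends, and the display number, geometry and hostname are
arbitrary:

  # on the server: start a persistent VNC session on its own X display
  vncserver :1 -geometry 1280x1024

  # on the client, over the LAN or through the VPN tunnel
  vncviewer server.lan:1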

I've been running this setup for at least seven years (probably longer,
I don't remember when I set it up originally) now, with no major issues.
I actually just ran into one recently (like two weeks ago) - the new
version of tigervnc doesn't work in the manner I've set it up with the
latest stable Xorg. Instead of troubleshooting, I just masked them and
everything is running normally.

I actually used a forum thread in the Docs, Tips, and Tricks forum[1] to
get it set up initially.


Dan

[1] https://forums.gentoo.org/viewtopic-t-72893-highlight-xvnc.html