[gentoo-user] Re: {OT} Allow work from home?

2016-03-06 Thread Kai Krakow
On Sat, 05 Mar 2016 00:52:09 +0100, lee wrote:

> >> > It uses some very clever ideas to place files into groups and
> >> > into proper order - other than using file mod and access times
> >> > like other defrag tools do (which even make the problem worse by
> >> > doing so because this destroys locality of data even more).
> >> 
> >> I've never heard of MyDefrag, I might try it out.  Does it make
> >> updating any faster?  
> >
> > Ah well, difficult question... Short answer: It uses countermeasures
> > against performance decreasing too fast after updates. It does this
> > by using a "gapped" on-disk file layout - leaving some gaps for
> > Windows to put temporary files. This way, files don't become as
> > widely scattered as they usually would during updates. But yes, it
> > improves installation time.
> 
> What difference would that make with an SSD?

Well, there's a good chance those gaps fall on trimmed erase blocks, so
they can be served fast by the SSD firmware. Of course, the same applies
if your OS is using discard commands to mark free blocks and you still
have enough free space in the FS. So, actually, for SSDs it probably
makes no difference.
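
For reference, checking and exercising discard on Linux is quick (a rough
sketch - device names and paths are just examples):

  # does the device report TRIM support? (non-zero DISC-GRAN/DISC-MAX means yes)
  lsblk --discard /dev/sda

  # trim all free space of the mounted filesystem once
  fstrim -v /

  # or do it weekly from cron instead of mounting with the "discard" option
  echo '0 3 * * 0 root /sbin/fstrim -v /' >> /etc/crontab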

> > Apparently it's been unmaintained for a few years, but it still does
> > a good job. It was built upon a theory by a student about how to
> > properly reorganize file layout on a spinning disk to keep
> > performance as high as possible.
> 
> For spinning disks, I can see how it can be beneficial.

My comment was targeted at this.

> >> > But even SSDs can use _proper_ defragmentation from time to time
> >> > for increased lifetime and performance (this is due to how the
> >> > FTL works and because erase blocks are huge, I won't get into
> >> > detail unless someone asks). This is why mydefrag also supports
> >> > flash optimization. It works by moving as few files as possible
> >> > while coalescing free space into big chunks, which in turn relaxes
> >> > pressure on the FTL and allows more free and contiguous
> >> > erase blocks, which reduces early flash chip wear. A filled SSD
> >> > with long usage history can certainly gain back some performance
> >> > from this.
> >> 
> >> How does it improve performance?  It seems to me that, for
> >> practical use, almost all of the better performance with SSDs is
> >> due to reduced latency.  And IIUC, it doesn't matter for the
> >> latency where data is stored on an SSD.  If its performance
> >> degrades over time when data is written to it, the SSD sucks, and
> >> the manufacturer should have done a better job.  Why else would I
> >> buy an SSD.  If it needs to reorganise the data stored on it, the
> >> firmware should do that.  
> >
> > There are different factors which have impact on performance, not
> > just seek times (which, as you write, is the worst performance
> > breaker):
> >
> >   * management overhead: the OS has to do more house keeping, which
> > (a) introduces more IOPS (which is the only relevant limiting
> > factor for SSD) and (b) introduces more CPU cycles and data
> > structure locking within the OS routines while performing IO,
> > which comes down to more CPU cycles spent during IO
> 
> How would that be reduced by defragmenting an SSD?

FS structures are coalesced back into simpler structures by
defragmenting, e.g. btrfs creates a huge overhead by splitting extents
due to its COW nature. Doing a defrag here combines this back into
fewer extents. It's reported on the btrfs list that this CAN make a big
difference even for SSD, though usually you only see the performance
loss with heavily fragmented files like VM images - so the
recommendation here is to set those files nocow.
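
A rough sketch of both steps on btrfs (paths are made-up examples; note that
chattr +C only affects files created after the flag is set):

  # recompact the extents of an already fragmented image file
  btrfs filesystem defragment -v /var/lib/libvirt/images/win7.img

  # mark the directory NOCOW so newly created images avoid COW fragmentation
  chattr +C /var/lib/libvirt/images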

> >   * erasing a block is where SSDs really suck performance-wise,
> > plus blocks are essentially read-only once written - that's how
> > flash works, a flash data block needs to be erased prior to being
> > rewritten - and that is (compared to the rest of its
> > performance) a really REALLY HUGE time factor  
> 
> So let the SSD do it when it's idle.  For applications in which it
> isn't idle enough, an SSD won't be the best solution.

That's probably true - haven't thought of this.

> >   * erase blocks are huge compared to common filesystem block sizes
> > (erase block = 1 or 2 MB vs. file system block being 4-64k
> > usually) which happens to result in this effect:
> >
> > - OS replaces a file by writing a new, deleting the old
> >   (common during updates), or the user deletes files
> > - OS marks some blocks as free in its FS structures; it depends
> > on the file size and its fragmentation whether this gives you a
> >   contiguous area of free blocks or many small blocks scattered
> >   across the disk: it results in free space fragmentation
> > - free space fragments happen to become small over time, much
> >   smaller than the erase block size
> > - if your system has TRIM/discard support it will tell the SSD
> >   firmware: here, I 

Re: [gentoo-user] Re: {OT} Allow work from home?

2016-03-04 Thread lee
Kai Krakow  writes:

> On Sat, 20 Feb 2016 10:48:57 +0100, lee wrote:
>
>> Kai Krakow  writes:
>> 
>> > On Fri, 22 Jan 2016 00:52:30 +0100, lee wrote:
>> >
>> >> Is WSUS of any use without domains?  If it is, I should take a
>> >> look at it.
>> >
>> > You can use it with and without domains. What domains give you
>> > through GPO is just automatic deployment of the needed registry
>> > settings in the client.
>> >
>> > You can simply create a proper .reg file and deploy it to the
>> > clients however you like. They will connect to WSUS and receive
>> > updates you control.
>> >
>> > No magic here.
>> 
>> Sounds good :)  Does it also solve the problem of having to make
>> settings for all users, like when setting up a MUA or Libreoffice?
>> 
>> That means settings on the same machine for all users, like setting up
>> seamonkey so that when composing an email, it's in plain text rather
>> than html, a particular email account every user should have and a
>> number of other settings that need to be the same for all users.  For
>> Libreoffice, it would be the deployment of a macro for all users and
>> making some settings.
>
> Well... Depends on the software. Some MUAs may store their settings to
> the registry, others to files. You'll have to figure out - it should
> work. Microsoft uses something like that to auto-deploy Outlook
> profiles to Windows domain users if an Exchange server is installed.
> Thunderbird uses a combination of registry and files. You could deploy
> a preconfigured Thunderbird profile to the user's profile dir, then
> configure the proper profile path in the registry. Firefox works the
> same: Profile directory, reference to it in the registry.
>
> I think LibreOffice would work similar to MS Office: Just deploy proper
> files after figuring out its path. I once deployed OpenOffice macros
> that way to Linux X11 terminal users.

It's possible --- and tedious --- to copy a seamonkey profile to other
users.  Then you find you have a number of users who require a more or
less different setup, or you add more users later with a more or less
different profile, or you need to add something to the profile for all
users, and you're back to square one.

I'd find it very useful to be able to do settings for multiple users
with some sort of configuration software which allows me to make
settings for them from an administrative account: change a setting,
select the users it should apply to, apply it and be done with it.

The way it is now, I need to log in as every user that needs some change
of settings and do that for each of them over and over again.  This
already sucks with a handful of users.  What do you do when you have
hundreds of users?



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-03-04 Thread lee
Kai Krakow  writes:

> On Sat, 20 Feb 2016 11:24:56 +0100, lee wrote:
>
>> > It uses some very clever ideas to place files into groups and into
>> > proper order - other than using file mod and access times like other
>> > defrag tools do (which even make the problem worse by doing so
>> > because this destroys locality of data even more).  
>> 
>> I've never heard of MyDefrag, I might try it out.  Does it make
>> updating any faster?
>
> Ah well, difficult question... Short answer: It uses countermeasures
> against performance decreasing too fast after updates. It does this by
> using a "gapped" on-disk file layout - leaving some gaps for Windows to
> put temporary files. This way, files don't become as widely scattered
> as they usually would during updates. But yes, it improves installation
> time.

What difference would that make with an SSD?

> Apparently it's been unmaintained for a few years, but it still does a
> good job. It was built upon a theory by a student about how to properly
> reorganize file layout on a spinning disk to keep performance as high
> as possible.

For spinning disks, I can see how it can be beneficial.

>> > But even SSDs can use _proper_ defragmentation from time to time for
>> > increased lifetime and performance (this is due to how the FTL works
>> > and because erase blocks are huge, I won't get into detail unless
>> > someone asks). This is why mydefrag also supports flash
>> > optimization. It works by moving as few files as possible while
>> > coalescing free space into big chunks, which in turn relaxes
>> > pressure on the FTL and allows more free and contiguous
>> > erase blocks, which reduces early flash chip wear. A filled SSD with
>> > long usage history can certainly gain back some performance from
>> > this.  
>> 
>> How does it improve performance?  It seems to me that, for practical
>> use, almost all of the better performance with SSDs is due to reduced
>> latency.  And IIUC, it doesn't matter for the latency where data is
>> stored on an SSD.  If its performance degrades over time when data is
>> written to it, the SSD sucks, and the manufacturer should have done a
>> better job.  Why else would I buy an SSD.  If it needs to reorganise
>> the data stored on it, the firmware should do that.
>
> There are different factors which have impact on performance, not just
> seek times (which, as you write, is the worst performance breaker):
>
>   * management overhead: the OS has to do more house keeping, which
> (a) introduces more IOPS (which is the only relevant limiting
> factor for SSD) and (b) introduces more CPU cycles and data
> structure locking within the OS routines while performing IO, which
> comes down to more CPU cycles spent during IO

How would that be reduced by defragmenting an SSD?

>   * erasing a block is where SSDs really suck performance-wise, plus
> blocks are essentially read-only once written - that's how flash
> works, a flash data block needs to be erased prior to being
> rewritten - and that is (compared to the rest of its performance) a
> really REALLY HUGE time factor

So let the SSD do it when it's idle.  For applications in which it isn't
idle enough, an SSD won't be the best solution.

>   * erase blocks are huge compared to common filesystem block sizes
> (erase block = 1 or 2 MB vs. file system block being 4-64k usually)
> which happens to result in this effect:
>
> - OS replaces a file by writing a new, deleting the old
>   (common during updates), or the user deletes files
> - OS marks some blocks as free in its FS structures; it depends on
>   the file size and its fragmentation whether this gives you a
>   contiguous area of free blocks or many small blocks scattered
>   across the disk: it results in free space fragmentation
> - free space fragments happen to become small over time, much
>   smaller than the erase block size
> - if your system has TRIM/discard support it will tell the SSD
>   firmware: here, I no longer use those 4k blocks
> - as you already figured out: those small blocks marked as free do
>   not properly align with the erase block size - so actually, you
>   may end up with a lot of free space but essentially no complete
>   erase block is marked as free

Use smaller erase blocks.

> - this situation means: the SSD firmware cannot reclaim this free
>   space to do "free block erasure" in advance so if you write
>   another block of small data you may end up with the SSD going
>   into a direct "read/modify/erase/write" cycle instead of just
>   "read/modify/write" and deferring the erasing until later - ah
>   yes, that's probably becoming slow then
> - what do we learn: (a) defragment free space from time to time,
>   (b) enable TRIM/discard to reclaim blocks in advance, (c) you may
>   want to over-provision your SSD: just don't ever use 

[gentoo-user] Re: {OT} Allow work from home?

2016-02-23 Thread Kai Krakow
On Sat, 20 Feb 2016 11:24:56 +0100, lee wrote:

> > It uses some very clever ideas to place files into groups and into
> > proper order - other than using file mod and access times like other
> > defrag tools do (which even make the problem worse by doing so
> > because this destroys locality of data even more).  
> 
> I've never heard of MyDefrag, I might try it out.  Does it make
> updating any faster?

Ah well, difficult question... Short answer: It uses countermeasures
against performance decreasing too fast after updates. It does this by
using a "gapped" on-disk file layout - leaving some gaps for Windows to
put temporary files. This way, files don't become as widely scattered
as they usually would during updates. But yes, it improves installation
time.

Apparently it's been unmaintained for a few years, but it still does a
good job. It was built upon a theory by a student about how to properly
reorganize file layout on a spinning disk to keep performance as high
as possible.

> > But even SSDs can use _proper_ defragmentation from time to time for
> > increased lifetime and performance (this is due to how the FTL works
> > and because erase blocks are huge, I won't get into detail unless
> > someone asks). This is why mydefrag also supports flash
> > optimization. It works by moving as few files as possible while
> > coalescing free space into big chunks, which in turn relaxes
> > pressure on the FTL and allows more free and contiguous
> > erase blocks, which reduces early flash chip wear. A filled SSD with
> > long usage history can certainly gain back some performance from
> > this.  
> 
> How does it improve performance?  It seems to me that, for practical
> use, almost all of the better performance with SSDs is due to reduced
> latency.  And IIUC, it doesn't matter for the latency where data is
> stored on an SSD.  If its performance degrades over time when data is
> written to it, the SSD sucks, and the manufacturer should have done a
> better job.  Why else would I buy an SSD.  If it needs to reorganise
> the data stored on it, the firmware should do that.

There are different factors which have impact on performance, not just
seek times (which, as you write, is the worst performance breaker):

  * management overhead: the OS has to do more house keeping, which
(a) introduces more IOPS (which is the only relevant limiting
factor for SSD) and (b) introduces more CPU cycles and data
structure locking within the OS routines while performing IO, which
comes down to more CPU cycles spent during IO

  * erasing a block is where SSDs really suck performance-wise, plus
blocks are essentially read-only once written - that's how flash
works, a flash data block needs to be erased prior to being
rewritten - and that is (compared to the rest of its performance) a
really REALLY HUGE time factor

  * erase blocks are huge compared to common filesystem block sizes
(erase block = 1 or 2 MB vs. file system block being 4-64k usually)
which happens to result in this effect:

- OS replaces a file by writing a new, deleting the old
  (common during updates), or the user deletes files
- OS marks some blocks as free in its FS structures; it depends on
  the file size and its fragmentation whether this gives you a
  contiguous area of free blocks or many small blocks scattered
  across the disk: it results in free space fragmentation
- free space fragments happen to become small over time, much
  smaller than the erase block size
- if your system has TRIM/discard support it will tell the SSD
  firmware: here, I no longer use those 4k blocks
- as you already figured out: those small blocks marked as free do
  not properly align with the erase block size - so actually, you
  may end up with a lot of free space but essentially no complete
  erase block is marked as free
- this situation means: the SSD firmware cannot reclaim this free
  space to do "free block erasure" in advance so if you write
  another block of small data you may end up with the SSD going
  into a direct "read/modify/erase/write" cycle instead of just
  "read/modify/write" and deferring the erasing until later - ah
  yes, that's probably becoming slow then
- what do we learn: (a) defragment free space from time to time,
  (b) enable TRIM/discard to reclaim blocks in advance, (c) you may
  want to over-provision your SSD: simply never use 10-15% of
  your SSD, trim that space, and leave it there for the firmware to
  shuffle erase blocks around (see the sketch after this list)
- the latter point also increases life-time for obvious reasons as
  SSDs only support a limited count of write-cycles per block
- this "shuffling around" blocks is called wear-levelling: the
  firmware chooses a block candidate with the least write cycles
  for doing "read/modify/write"
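
A minimal sketch of point (c) above - assuming /dev/sda3 is a partition you
deliberately keep unused as spare area; blkdiscard irrevocably throws away
its contents:

  # check that the device advertises discard/TRIM at all
  lsblk --discard /dev/sda

  # hand the unused partition back to the firmware as spare area
  blkdiscard /dev/sda3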

So, SSDs actually do this "reorganization" as you call 

[gentoo-user] Re: {OT} Allow work from home?

2016-02-23 Thread Kai Krakow
On Sat, 20 Feb 2016 10:48:57 +0100, lee wrote:

> Kai Krakow  writes:
> 
> > On Fri, 22 Jan 2016 00:52:30 +0100, lee wrote:
> >
> >> Is WSUS of any use without domains?  If it is, I should take a
> >> look at it.
> >
> > You can use it with and without domains. What domains give you
> > through GPO is just automatic deployment of the needed registry
> > settings in the client.
> >
> > You can simply create a proper .reg file and deploy it to the
> > clients however you like. They will connect to WSUS and receive
> > updates you control.
> >
> > No magic here.
> 
> Sounds good :)  Does it also solve the problem of having to make
> settings for all users, like when setting up a MUA or Libreoffice?
> 
> That means settings on the same machine for all users, like setting up
> seamonkey so that when composing an email, it's in plain text rather
> than html, a particular email account every user should have and a
> number of other settings that need to be the same for all users.  For
> Libreoffice, it would be the deployment of a macro for all users and
> making some settings.

Well... Depends on the software. Some MUAs may store their settings to
the registry, others to files. You'll have to figure out - it should
work. Microsoft uses something like that to auto-deploy Outlook
profiles to Windows domain users if an Exchange server is installed.
Thunderbird uses a combination of registry and files. You could deploy
> a preconfigured Thunderbird profile to the user's profile dir, then
configure the proper profile path in the registry. Firefox works the
same: Profile directory, reference to it in the registry.

I think LibreOffice would work similarly to MS Office: just deploy the
proper files after figuring out its path. I once deployed OpenOffice
macros that way to Linux X11 terminal users.
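
On the Linux side such a push can be as crude as this sketch (template and
paths are made up for illustration, and it blindly overwrites whatever
profile the users already have):

  for home in /home/*; do
      user=$(basename "$home")
      mkdir -p "$home/.thunderbird"
      cp -a /srv/templates/tb-default "$home/.thunderbird/default.prof"
      printf '[General]\nStartWithLastProfile=1\n\n[Profile0]\nName=default\nIsRelative=1\nPath=default.prof\n' \
          > "$home/.thunderbird/profiles.ini"
      chown -R "$user": "$home/.thunderbird"
  done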

-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] Re: {OT} Allow work from home?

2016-02-21 Thread lee
Kai Krakow  writes:

> On Wed, 20 Jan 2016 01:46:29 +0100, lee wrote:
>
>> The time before, it wasn't
>> a VM but a very slow machine, and that also took a week.  You can have
>> the fastest machine in the world and Windoze always manages to bring
>> it down to a slowness we wouldn't have accepted even 20 years ago.
>
> This is mainly an artifact of Windows updates destroying locality of
> data pretty fast and mainly a problem when running on spinning rust.
> DLLs and data files needed for booting or starting specific
> software become spread wide across the hard disk. Fragmentation isn't
> the issue here - NTFS is pretty good at keeping it low. Still, the
> right defragmentation tool will help you:

You can't very well defragment the disk while updates are being
performed.  Updating goes like this:


+ install from an installation media

+ tell the machine to update

+ come back next day and find out that it's still looking for updates or
  trying to download them or wants to be restarted

+ restart the machine

+ start over with the second step until all updates have been installed


That usually takes a week.  When it's finally done, disable all
automatic updates because if you don't, the machine usually becomes
unusable when it installs another update.

It doesn't matter if you have the fastest machine in the world or some
old hardware you wouldn't actually use anymore, it always takes about a
week.

> I always recommend staying away from the 1000 types of "tuning tools",
> they actually make it worse and take away your chance of properly
> optimizing the on-disk file layout.

I'm not worried about that.  One of the VMs is still on an SSD, so I
turned off defragging.  The other VMs that use files on a hard disk
defrag themselves regularly over night.

> And I always recommend using MyDefrag and using its system disk
> defrag profile to reorder the files in your hard disk. It takes ages
> the first time it runs but it brings back your system to almost out of
> the box boot and software startup time performance.

That hasn't been an issue with any of the VMs yet.

> It uses some very clever ideas to place files into groups and into
> proper order - other than using file mod and access times like other
> defrag tools do (which even make the problem worse by doing so because
> this destroys locality of data even more).

I've never heard of MyDefrag, I might try it out.  Does it make updating
any faster?

> But even SSDs can use _proper_ defragmentation from time to time for
> increased lifetime and performance (this is due to how the FTL works
> and because erase blocks are huge, I won't get into detail unless
> someone asks). This is why mydefrag also supports flash optimization.
> It works by moving as few files as possible while coalescing free space
> into big chunks, which in turn relaxes pressure on the FTL and allows
> more free and contiguous erase blocks, which reduces early flash
> chip wear. A filled SSD with long usage history can certainly gain back
> some performance from this.

How does it improve performance?  It seems to me that, for practical
use, almost all of the better performance with SSDs is due to reduced
latency.  And IIUC, it doesn't matter for the latency where data is
stored on an SSD.  If its performance degrades over time when data is
written to it, the SSD sucks, and the manufacturer should have done a
better job.  Why else would I buy an SSD.  If it needs to reorganise the
data stored on it, the firmware should do that.



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-02-21 Thread lee
Kai Krakow  writes:

> On Fri, 22 Jan 2016 00:52:30 +0100, lee wrote:
>
>> Is WSUS of any use without domains?  If it is, I should take a look at
>> it.
>
> You can use it with and without domains. What domains give you through
> GPO is just automatic deployment of the needed registry settings in the
> client.
>
> You can simply create a proper .reg file and deploy it to the clients
> however you like. They will connect to WSUS and receive updates you
> control.
>
> No magic here.

Sounds good :)  Does it also solve the problem of having to make
settings for all users, like when setting up a MUA or Libreoffice?

That means settings on the same machine for all users, like setting up
seamonkey so that when composing an email, it's in plain text rather
than html, a particular email account every user should have and a
number of other settings that need to be the same for all users.  For
Libreoffice, it would be the deployment of a macro for all users and
making some settings.



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-02-21 Thread lee
Kai Krakow  writes:

> On Wed, 20 Jan 2016 01:46:29 +0100, lee wrote:
>
>> >> Overcommitting disk space sounds like a very bad idea.
>> >> Overcommitting memory is not possible with xen.  
>> >
>> > Overcommitting diskspace isn't such a bad idea, considering most
>> > installs never utilize all the available diskspace.  
>> 
>> When they do not use it anyway, there is no reason to give it to them
>> in the first place.  And when they do use it, how do the VMs handle
>> the problem that they have plenty disk space available, from their
>> point of view, while the host which they don't know about doesn't
>> allow them to use it?
>> 
>> Besides, overcommitting disk space means to intentionally create a
>> setup which involves that the host can run out of disk space easily.
>> That is not something I would want to create for a host which is
>> required to function reliably.
>> 
>> And how much do you need to worry about the security of the VMs when
>> you build in a way for the users to bring the whole machine, or at
>> least random VMs, down by using the disk space which has been
>> assigned to them?  The users are somewhat likely to do that even
>> unintentionally - all the more so the more you overcommit.
>
> Overcommitting storage is for setups where it's easy to add storage
> pools when needed, like virtual SAN. You just monitor available space
> and when it falls below a threshold, just add more to the storage pool
> whose filesystem will grow.
>
> You just overcommit to whatever storage requirements you may ever need
> combined over all VMs but you initially only buy what you need to start
> with including short term expected growth.
>
> Then start with clones/snapshots from the same VM image (SANs provide
> that so you actually do not have to care about snapshot dependencies
> within your virtualization software).
>
> SANs usually also provide deduplication and compression, so at any
> point you can coalesce the images back into smaller storage
> requirements.
>
> A sane virtualization solution also provides RAM deduplication and
> compaction so that you can overcommit RAM the same way as storage. Of
> course it will at some point borrow RAM from swap space. Usually you
> will then just migrate one VM to some other hardware - even while it is
> running. If connected to a SAN this means: You don't have to move the
> VM images themselves. The migration is almost instant: The old VM host acts
> as some sort of virtualized swap file holding the complete RAM, the new
> host just "swaps in" needed RAM blocks over network and migrates the
> rest during idle time in the background. This can even be automated by
> monitoring the resources and letting the VM manager decide and act.
>
> The Linux kernel lately gained support for all this so you could
> probably even home-brew it.

Ok, that makes sense when you have more or less unlimited resources to
pay for all the hardware you need for this.  I wonder how much money
you'd have to put out to even get started with a setup like this ...



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Grant
>> The answer to this may be an obvious "yes" but I've never done it so I'm
>> not sure.  Can I route requests from machine C through machine A only
>> for my domain name, and not involve A for C's other internet requests?
>> If so, where is that configured?
>
> While ZT can be used to route requests between networks, it is mainly
> used to talk directly between clients. If A wants to talk to C over ZT,
> it uses C's ZT IP address.
>
> Here's a snippet from ifconfig on this machine, which may help it make
> sense to you
>
> wlan0: flags=4163  mtu 1500
> inet 192.168.1.6  netmask 255.255.255.0  broadcast 192.168.1.255
> ether c4:8e:8f:f7:55:c9  txqueuelen 1000  (Ethernet)
>
> zt0: flags=4163  mtu 2800
> inet 10.252.252.6  netmask 255.255.255.0  broadcast 10.252.252.255
>
> To talk to this computer from another of my machines over ZT I would use
> the 10.252... address. If you tried that address, you'd get nowhere as
> you are not connected to my network.


So if 10.252.252.6 were configured as a router, could I join your ZT
network and use iptables to route my example.com 80/443 requests to
10.252.252.6, thereby granting me access to my web apps which are
configured to only allow your machine's WAN IP?

The first couple paragraphs here make it sound like a centralized SaaS
as far as the setup phase of the connection:

https://www.zerotier.com/blog/?p=577

Is it possible (easy?) to run your own "core node" and so not interact
with the official core nodes at all?

- Grant



[gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Kai Krakow
On Wed, 20 Jan 2016 01:46:29 +0100, lee wrote:

> The time before, it wasn't
> a VM but a very slow machine, and that also took a week.  You can have
> the fastest machine in the world and Windoze always manages to bring
> it down to a slowness we wouldn't have accepted even 20 years ago.

This is mainly an artifact of Windows updates destroying locality of
data pretty fast and mainly a problem when running on spinning rust.
DLLs and data files needed for booting or starting specific
software become spread wide across the hard disk. Fragmentation isn't
the issue here - NTFS is pretty good at keeping it low. Still, the
right defragmentation tool will help you: I always recommend staying
away from the 1000 types of "tuning tools", they actually make it worse
and take away your chance of properly optimizing the on-disk file
layout. And I always recommend using MyDefrag and using its system disk
defrag profile to reorder the files in your hard disk. It takes ages
the first time it runs but it brings back your system to almost out of
the box boot and software startup time performance. It uses some very
clever ideas to place files into groups and into proper order - other
than using file mod and access times like other defrag tools do (which
even make the problem worse by doing so because this destroys locality
of data even more).

But even SSDs can use _proper_ defragmentation from time to time for
increased lifetime and performance (this is due to how the FTL works
and because erase blocks are huge, I won't get into detail unless
someone asks). This is why mydefrag also supports flash optimization.
It works by moving as few files as possible while coalescing free space
into big chunks, which in turn relaxes pressure on the FTL and allows
more free and contiguous erase blocks, which reduces early flash
chip wear. A filled SSD with long usage history can certainly gain back
some performance from this.

-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Neil Bothwick
On Fri, 22 Jan 2016 11:51:45 -0800, Grant wrote:

> > To talk to this computer from another of my machines over ZT I would
> > use the 10.252... address. If you tried that address, you'd get
> > nowhere as you are not connected to my network.  

> So if 10.252.252.6 were configured as a router, could I join your ZT
> network and use iptables to route my example.com 80/443 requests to
> 10.252.252.6, thereby granting me access to my web apps which are
> configured to only allow your machine's WAN IP?

You don't need a bridge in a network to join it. If I want you to join
it, I give you the network ID and you simply join it, although you can't
actually connect to it until I authorise the connection.

However, if this machine were configured as a bridge, then once you had
joined my network you would have access to all of my LAN, rather like an
OpenVPN connection. It seems that the main difference between this and a
traditional VPN is that all of the setup work is done on the one
computer, connecting extra clients is just a matter of connecting them to
the network.

Note that I haven't actually tried this, every machine on my LAN that I
want to be able to connect to is running ZT so is directly accessible.

> Is it possible (easy?) to run your own "core node" and so not interact
> with the official core nodes at all?

It is definitely possible, and you skip the "only ten clients for
free" limit as that only applies to using their servers. Once again, it
isn't something I've tried yet, but it is on my list of "things to do
when I find some time". I'm quite happy using their discovery servers so
this would be only an exercise in trying it "because I can".


-- 
Neil Bothwick

MUPHRY'S LAW: The principle that any criticism of the writing of others
will itself contain at least one grammatical error.




[gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Kai Krakow
On Fri, 22 Jan 2016 00:52:30 +0100, lee wrote:

> Is WSUS of any use without domains?  If it is, I should take a look at
> it.

You can use it with and without domains. What domains give you through
GPO is just automatic deployment of the needed registry settings in the
client.

You can simply create a proper .reg file and deploy it to the clients
however you like. They will connect to WSUS and receive updates you
control.

No magic here.
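
For illustration, such a .reg file might look roughly like this (server name
and port are placeholders; double-check the exact value names against the
WSUS documentation for your Windows and WSUS versions):

  Windows Registry Editor Version 5.00

  [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
  "WUServer"="http://wsus.example.lan:8530"
  "WUStatusServer"="http://wsus.example.lan:8530"

  [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
  "UseWUServer"=dword:00000001
  "AUOptions"=dword:00000004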

-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Neil Bothwick
On Thu, 21 Jan 2016 17:18:27 -0800, Grant wrote:

> > There is ZeroTier as a replacement for OpenVPN, and Syncthing for
> > syncing. Both are P2P solutions and you can run your own discovery
> > servers if you don't want any traffic going through a 3rd party
> > (although they don't send data through the servers).
> >
> > I've no idea whether that would meet your security criteria but it
> > certainly fulfils the "easier than OpenVPN" one. It will take only a
> > few minutes to install and setup using the public servers, although,
> > as I said, your network is never public, so you can check whether
> > they do what you want. Then you can look at hosting your own server
> > for security.
> >
> > https://www.zerotier.com/
> > https://syncthing.net/  

> Zerotier looks especially interesting.  Can I have machine A listen for
> Zerotier connections, have machine B connect to machine A via Zerotier,
> have machine C connect to machine A via Zerotier, and rsync push from B
> to C?

You set up a network and the machines all connect to that network, so A,
B and C can all talk to each other.

> Does connecting two machines via Zerotier involve any security
> considerations besides those involved when connecting those machines to
> the internet?  In other words, is it a simple network connection or are
> other privileges involved with that connection?

Connections are encrypted, handled by the ZeroTier protocols, but
otherwise it behaves like a normal network connection. 

> Can I somehow require the Zerotier connection between machines A and C
> in order for C to pass HTTP basic authentication on my web server which
> resides elsewhere?  Maybe I can route all traffic from machine C to my
> web server through C's Zerotier connection to A and lock down basic
> authentication on my web server to machine A?

Your ZeroTier connections are on a separate network, you pick an address
block when you set up the network but that network is only accessible to
other machines connected to your ZeroTier network. You can have ZT
allocate addresses within that block; it's not dynamic addressing because
once a client is given an address, it always gets the same address, or you
can specify the address for each client. So you can include an address
requirement in your .htaccess to ensure connections are only allowed from
your ZT network.
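
With Apache 2.4 syntax that can be a one-liner (the 10.252.252.0/24 block is
just whatever range you assigned to your ZT network; on Apache 2.2 you'd use
Order/Allow from instead):

  # .htaccess - only accept clients coming in over the ZeroTier network
  Require ip 10.252.252.0/24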


-- 
Neil Bothwick

furbling, v.:
Having to wander through a maze of ropes at an airport or bank
even when you are the only person in line.
-- Rich Hall, "Sniglets"




Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Neil Bothwick
On Fri, 22 Jan 2016 07:52:12 -0500, Rich Freeman wrote:

> My understanding is that ZT does not support routing of any kind.
> Traffic destined to a ZT peer goes directly to that peer, and that's
> it.  You can't route over ZT and onto a subnet on a remote peer's
> network, or from one peer to another, or anything like that.

You can set up one machine on a LAN as a bridge, that then connects your
ZT clients to the LAN, much like a traditional VPN.


-- 
Neil Bothwick

I used to have a handle on life, then it broke.




[gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Grant
>
> > Zerotier looks especially interesting.  Can I have machine A listen for
> > Zerotier connections, have machine B connect to machine A via Zerotier,
> > have machine C connect to machine A via Zerotier, and rsync push from B
> > to C?
>
> You set up a network and the machines all connect to that network, so A,
> B and C can all talk to each other.
>
> > Does connecting two machines via Zerotier involve any security
> > considerations besides those involved when connecting those machines to
> > the internet?  In other words, is it a simple network connection or are
> > other privileges involved with that connection?
>
> Connections are encrypted, handled by the ZeroTier protocols, but
> otherwise it behaves like a normal network connection.
>
> > Can I somehow require the Zerotier connection between machines A and C
> > in order for C to pass HTTP basic authentication on my web server which
> > resides elsewhere?  Maybe I can route all traffic from machine C to my
> > web server through C's Zerotier connection to A and lock down basic
> > authentication on my web server to machine A?
>
> Your ZeroTier connections are on a separate network, you pick an address
> block when you set up the network but that network is only accessible to
> other machines connected to your ZeroTier network. You can have ZT
> allocate addresses within that block; it's not dynamic addressing because
> once a client is given an address, it always gets the same address, or you
> can specify the address for each client. So you can include an address
> requirement in your .htaccess to ensure connections are only allowed from
> your ZT network.
>


The answer to this may be an obvious "yes" but I've never done it so I'm
not sure.  Can I route requests from machine C through machine A only for
my domain name, and not involve A for C's other internet requests?  If so,
where is that configured?

BTW, how did you find ZT?  Pity there's no ebuild yet.

- Grant


Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Rich Freeman
On Fri, Jan 22, 2016 at 7:29 AM, Grant  wrote:
>
> The answer to this may be an obvious "yes" but I've never done it so I'm not
> sure.  Can I route requests from machine C through machine A only for my
> domain name, and not involve A for C's other internet requests?  If so,
> where is that configured?
>
> BTW, how did you find ZT?  Pity there's no ebuild yet.
>

My understanding is that ZT does not support routing of any kind.
Traffic destined to a ZT peer goes directly to that peer, and that's
it.  You can't route over ZT and onto a subnet on a remote peer's
network, or from one peer to another, or anything like that.

So, ZT isn't even capable of routing internet traffic right now, so
none of it will go over ZT.

For other VPNs it is all IP and routing works however you define it on
either side.  You can make a VPN your default route, or not, etc.  You
can do whatever iproute2/iptables/etc allows on linux hosts.  I
imagine windows is a bit less flexible but I'm sure you can define
which interface is the default route.
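
As a sketch of the iproute2/iptables side (addresses are placeholders -
203.0.113.10 standing in for the web server - and whether A actually sees
and forwards C's packets depends on the ZT routing/bridging caveats above):

  # on machine C: send only the web server's traffic via A's ZeroTier address
  ip route add 203.0.113.10/32 via 10.252.252.6 dev zt0

  # on machine A: forward and masquerade that traffic towards the internet
  sysctl -w net.ipv4.ip_forward=1
  iptables -t nat -A POSTROUTING -d 203.0.113.10 -o eth0 -j MASQUERADE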

-- 
Rich



Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread Neil Bothwick
On Fri, 22 Jan 2016 04:29:00 -0800, Grant wrote:

> The answer to this may be an obvious "yes" but I've never done it so I'm
> not sure.  Can I route requests from machine C through machine A only
> for my domain name, and not involve A for C's other internet requests?
> If so, where is that configured?

While ZT can be used to route requests between networks, it is mainly
used to talk directly between clients. If A wants to talk to C over ZT,
it uses C's ZT IP address.

Here's a snippet from ifconfig on this machine, which may help it make
sense to you

wlan0: flags=4163  mtu 1500
inet 192.168.1.6  netmask 255.255.255.0  broadcast 192.168.1.255
ether c4:8e:8f:f7:55:c9  txqueuelen 1000  (Ethernet)

zt0: flags=4163  mtu 2800
inet 10.252.252.6  netmask 255.255.255.0  broadcast 10.252.252.255

To talk to this computer from another of my machines over ZT I would use
the 10.252... address. If you tried that address, you'd get nowhere as
you are not connected to my network.

Set up a network and play with it. It costs nothing to set up a network
with up to 10 clients. The main benefit is that it is so easy to
administer and add new clients. If you use it between two machines in the
same LAN, the traffic doesn't go outside of the LAN, so it works at more
or less the same speed.
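
Once the service is installed, getting a machine onto a network is a
one-liner (the network ID below is a placeholder, and new members still have
to be authorised by whoever controls the network):

  # join a network by its 16-hex-digit ID
  zerotier-cli join 0123456789abcdef

  # list joined networks and the addresses handed out by the controller
  zerotier-cli listnetworks

  # show this node's own ZeroTier address and online status
  zerotier-cli info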

> BTW, how did you find ZT?  Pity there's no ebuild yet.

Someone mentioned it during a talk at Liverpool LUG. It wasn't the topic
of the talk, he just used it to grab something from his home network to
answer a question. An ebuild would be nice, but the installer script
works perfectly here, both for systemd and openrc systems.


-- 
Neil Bothwick

In the 60's people took acid to make the world weird.
Now the world is weird and people take Prozac to make it normal.




[gentoo-user] Re: {OT} Allow work from home?

2016-01-22 Thread James
Neil Bothwick writes:


> > The answer to this may be an obvious "yes" but I've never done it so I'm
> > not sure.  Can I route requests from machine C through machine A only
> > for my domain name, and not involve A for C's other internet requests?
> > If so, where is that configured?

From what I read, 10 nodes or fewer are free. I'd be willing to participate
as a remote node so a small group of gentoo users can figure this out and
document some example configurations, as it seems to be very interesting and
useful. Additionally, a custom set of iptables rules or a bridge-filter
would be keen information to add to a gentoo wiki page on this topic, imho.


This could also be a wonderful way for proxy-maintainers to hang in a group
and work more closely on things like digesting EAPI-6 and teaming up on
more complex ebuild issues. It does sound like fun!


James








[gentoo-user] Re: {OT} Allow work from home?

2016-01-21 Thread Kai Krakow
On Wed, 20 Jan 2016 01:46:29 +0100, lee wrote:

> >> Overcommitting disk space sounds like a very bad idea.
> >> Overcommitting memory is not possible with xen.  
> >
> > Overcommitting diskspace isn't such a bad idea, considering most
> > installs never utilize all the available diskspace.  
> 
> When they do not use it anyway, there is no reason to give it to them
> in the first place.  And when they do use it, how do the VMs handle
> the problem that they have plenty disk space available, from their
> point of view, while the host which they don't know about doesn't
> allow them to use it?
> 
> Besides, overcommitting disk space means to intentionally create a
> setup which involves that the host can run out of disk space easily.
> That is not something I would want to create for a host which is
> required to function reliably.
> 
> And how much do you need to worry about the security of the VMs when
> you build in a way for the users to bring the whole machine, or at
> least random VMs, down by using the disk space which has been
> assigned to them?  The users are somewhat likely to do that even
> unintentionally - all the more so the more you overcommit.

Overcommitting storage is for setups where it's easy to add storage
pools when needed, like virtual SAN. You just monitor available space
and when it falls below a threshold, just add more to the storage pool
whose filesystem will grow.

You just overcommit to whatever storage requirements you may ever need
combined over all VMs but you initially only buy what you need to start
with including short term expected growth.

Then start with clones/snapshots from the same VM image (SANs provide
that so you actually do not have to care about snapshot dependencies
within your virtualization software).

SANs usually also provide deduplication and compression, so at any
point you can coalesce the images back into smaller storage
requirements.

A sane virtualization solution also provides RAM deduplication and
compaction so that you can overcommit RAM the same way as storage. Of
course it will at some point borrow RAM from swap space. Usually you
will then just migrate one VM to some other hardware - even while it is
running. If connected to a SAN this means: You don't have to move the
VM images themselves. The migration is almost instant: The old VM host acts
as some sort of virtualized swap file holding the complete RAM, the new
host just "swaps in" needed RAM blocks over network and migrates the
rest during idle time in the background. This can even be automated by
monitoring the resources and letting the VM manager decide and act.

The Linux kernel lately gained support for all this so you could
probably even home-brew it.
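
For the RAM deduplication part, the kernel piece is KSM; a minimal sketch of
switching it on and watching what it saves (guests only benefit if the
hypervisor marks their memory as mergeable, e.g. KVM/QEMU via madvise):

  # enable kernel samepage merging
  echo 1 > /sys/kernel/mm/ksm/run

  # pages currently deduplicated, and how many mappings share them
  cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing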

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: {OT} Allow work from home?

2016-01-21 Thread Kai Krakow
On Wed, 20 Jan 2016 08:12:33 +0100, "J. Roeleveld" wrote:

> > > Overcommitting memory is, i think, on the roadmap for Xen.
> > > (Disclaimer: At least, I seem to remember reading that
> > > somewhere)  
> > 
> > That would be a nice feature.  
> 
> For VDIs, I might consider using it.
> But considering most OSs tend to fill up all available memory with
> caches, I expect performance issues.

This is what memory ballooning is for: To reclaim tight memory
resources from discardable OS memory. The host cache will take care of
doing proper caching.
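
On Xen that reclaim is just a toolstack call - a sketch with a made-up domain
name; the guest needs a working balloon driver, and it's worth checking
xl(1) for how your Xen version parses the size argument:

  # shrink a running guest to 2 GiB via its balloon driver
  xl mem-set winvm 2048m

  # give the memory back later
  xl mem-set winvm 4096m

  # per-domain memory as the toolstack currently sees it
  xl list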

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: {OT} Allow work from home?

2016-01-21 Thread Grant
>
> > I would
> > need to be able to rsync to the laptop and I'd rather not be involved
> > in the remote employee's router config.  Is there an easier solution
> > for that than OpenVPN?
>
> There is ZeroTier as a replacement for OpenVPN, and Syncthing for
> syncing. Both are P2P solutions and you can run your own discovery
> servers if you don't want any traffic going through a 3rd party (although
> they don't send data through the servers).
>
> I've no idea whether that would meet your security criteria but it
> certainly fulfils the "easier than OpenVPN" one. It will take only a few
> minutes to install and setup using the public servers, although, as I
> said, your network is never public, so you can check whether they do what
> you want. Then you can look at hosting your own server for security.
>
> https://www.zerotier.com/
> https://syncthing.net/



Zerotier looks especially interesting.  Can I have machine A listen for
Zerotier connections, have machine B connect to machine A via Zerotier,
have machine C connect to machine A via Zerotier, and rsync push from B to
C?

Does connecting two machines via Zerotier involve any security
considerations besides those involved when connecting those machines to the
internet?  In other words, is it a simple network connection or are other
privileges involved with that connection?

Can I somehow require the Zerotier connection between machines A and C in
order for C to pass HTTP basic authentication on my web server which
resides elsewhere?  Maybe I can route all traffic from machine C to my web
server through C's Zerotier connection to A and lock down basic
authentication on my web server to machine A?

- Grant


[gentoo-user] Re: {OT} Allow work from home?

2016-01-20 Thread James
lee writes:


> > Windows has RDP, which is a lot better than VNC. Especially when 
> > dealing with low-bandwidth connections.
> 
> Wasn't RDP deprecated earlier in this discussion because it seemed not to
> be sufficiently secure?

Has anyone had experience with Thinlinc? [1]

It seems to have commercial (binary) support as well as a list of open source
components one can use. Is it secure? I don't know; I just ran across it.


As an independent consultant, I've found that most companies over the years
frown on remote work. So I've mostly gotten stuck driving a lot, or working
on things nobody
else (sane) would touch. So one does develop thick skin; but most of this
work was engineering hardware or embedded systems. It's even worse if you
are an employee. So, in the past I just dedicated a windoze machine
and linux machine where needed on fresh installs for their peace of mind.
Granted, I only had a few customers at any given time, so traditional
backups completed the remote work environment. I'd like to move into 2016
and the cloud using the latest of what is available for remote workers.


So for 18 months now, I have been poking around extensively in the
cluster/cloud space. Remote work is mostly mandatory; it fits in with their
business model and devops needs. Since January, 2016, I've had an explosion
of remote opportunities, to the point that something fundamental here in the
US has changed with remote work. So Kudos to Grant for starting this thread
and I deeply appreciate what everyone has contributed. I am hoping that the
'corporate folks' have a solution for remote workers (employees  or
contractors) so I do not have to be responsible for that security design of
the remote component. I have my doubts. There is also a dramatic up-tick in
using gentoo in cluster/cloud solutions from my perspective. When I suggest
folks benchmark their codes on the platforms they are running on and then on
gentoo underneath, most cede that ground without testing. The few that do
test, once they get past the bitching about installing gentoo, are quite amazed
at the performance gains using gentoo under their cluster/cloud.


What I hope is that a companion-reference iptables/nftables configuration
and the options from this thread make it to the gentoo wiki. I have
static IPs at home and fiber, so a solution for that scenario is keenly
appreciated, just in case the companies I work for do not have something
robust that allows a gentoo workstation to be a remote work companion to
whatever they use (windoze, chrome, apple, etc) for a secure solution via
remote work connections.

(thanks again Grant),
James

[1] http://www.cendio.com/seamlessrdp











Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-20 Thread Grant
> As an independent consultant, I've found that most companies over the years
> frown on remote work. So I've mostly gotten stuck driving a lot, or working
> on things nobody
> else (sane) would touch. So one does develop thick skin; but most of this
> work was engineering hardware or embedded systems. It's even worse if you
> are an employee. So, in the past I just dedicated a windoze machine
> and linux machine where needed on fresh installs for their peace of mind.
> Granted, I only had a few customers at any given time, so traditional
> backups completed the remote work environment. I'd like to move into 2016
> and the cloud using the latest of what is available for remote workers.
>
>
> So for 18 months now, I have been poking around extensively in the
> cluster/cloud space. Remote work is mostly mandatory; it fits in with their
> business model and devops needs. Since January, 2016, I've had an explosion
> of remote opportunities, to the point that something fundamental here in the
> US has changed with remote work. So Kudos to Grant for starting this thread
> and I deeply appreciate what everyone has contributed. I am hoping that the
> 'corporate folks' have a solution for remote workers (employees  or
> contractors) so I do not have to be responsible for that security design of
> the remote component. I have my doubts. There is also a dramatic up-tick in
> using gentoo in cluster/cloud solutions from my perspective. When I suggest
> folks benchmark their codes on the platforms they are running on and then on
> gentoo underneath, most cede that ground without testing. The few that do
> test, once they get past the bitching about installing gentoo, are quite amazed
> at the performance gains using gentoo under their cluster/cloud.
>
>
> What I hope is that a companion-reference iptables/nftables configuration
> and the options from this thread make it to the gentoo wiki. I have
> static IPs at home and fiber, so a solution for that scenario is keenly
> appreciated, just in case the companies I work for do not have something
> robust that allows a gentoo workstation to be a remote work companion to
> whatever they use (windoze, chrome, apple, etc) for a secure solution via
> remote work connections.


This is really interesting stuff, thank you James.

- Grant



[gentoo-user] Re: {OT} Allow work from home?

2016-01-19 Thread Kai Krakow
On Tue, 19 Jan 2016 19:39:26 +0000 (UTC), Grant Edwards wrote:

> On 2016-01-19, Mick  wrote:
> 
> > As far as I understand it RDP is different to VNC, in the sense that
> > instead of sending every pixel down the line it only sends
> > compressed semantic information *about* a desktop component
> > (e.g. the start button, a control signal, etc.) and the client
> > interprets this locally as a button or a control command. It is also
> > using caching to minimise retransmission.
> 
> I don't think so.  AFAICT, RDP (a-la Windows) and VNC both do exactly
> the same thing: they send display pixel info to be displayed.  They
> try to optimize the process by only sending deltas and by using
> various compression schemes, but they're both doing basically the
> same thing.  RDP also has a bunch of other stuff to support things
> like audio, printer, filesystem, and serial/parallel port redirection
> that I don't think VNC ever had.  But the display/mouse/keyboard part
> of it works pretty much the same.

Well, RDP indeed sends bitmaps. But it can do it a lot more intelligently
and in a more desktop-aware way than VNC. First, it supports bitmap caching
and can reuse bitmaps which were already sent - which in itself is quite a
good compression for usual desktop content. It also supports a wide variety
of compression types. It can also encode the fact that bitmaps have moved,
thus only requiring sending the background of a window which was moved -
and it can reuse bitmaps from the cache.

RDP also can detect and send content as a video stream. It also
supports sending graphical desktop effects using 3d acceleration and
transparency.

It also knows about glyphs (font rendering), which thus do not have to be
sent as bitmaps (which, due to font smoothing, may not compress well).

Xfreerdp is a nice implementation which implements almost all those
features. I was able to use it to smoothly operate a remote Windows
desktop with Aero effects enabled. The latency was very low, the
experience was almost the same as working physically in front of the
machine. Of course, the remote end has to have a sufficiently new RDP
server implementation (like Windows 8 or Server 2012). It also
supports folder, printer, sound, and port redirection. It may also
support the new Windows RDP UDP transport which works more like a video
stream encoder and sacrifices immediate image quality for low latency. I
haven't tried it with xfreerdp. In Windows, it is very nice for high
latency links where it catches up image quality after 1 or 2 seconds or
so.
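
A typical invocation for that kind of session looks something like this
(host, user and share name are placeholders; option names shift a bit
between FreeRDP releases, so check xfreerdp --help on yours):

  # full-screen session with sound, clipboard and the local home dir shared
  xfreerdp /v:ts.example.lan /u:someuser /f +clipboard /sound:sys:alsa /drive:home,$HOME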

There are also demos where you can remotely play Diablo 3 on Windows using
a Linux RDP client - with low latency, sound and good image quality. I
doubt VNC could do that, despite the claim that it's "basically the same". [1]

VNC just cannot do it. It even sometimes does not transfer small screen
updates like a blinking cursor - let alone the mouse pointer, which only
follows on clicks. It also doesn't support catching up to better image
quality in a deferred way to keep latency low. It's either slow, or
visually unpleasing at best. It's also annoying that it's bound to the
physical screen resolution of the remote machine. VNC was only good
back in WinXP times, when RDP was not much more than VNC in terms of
screen content transfer, and network links were generally much slower
than today, and VNC had some intelligent compression algos in contrast
to RDP. VNC just doesn't seem to be able to make use of low latency and
high bandwidth links - it still feels sluggish and slow. It's probably a
protocol implementation issue (not streaming and synchronous).

Given that, I'd say: No, it's _not_ "basically" the same. RDP is just
much more than simple bitmap transfer - even if we exclude advanced
features like sound, file transfer, clipboard sharing etc and stick to
the common features.

BTW: As far as I know, a wayland display server will be able to expose
an RDP framebuffer which you could connect to from Windows RDP clients,
and should also support smooth desktop effects and video encoding at
some time in the future. I followed that topic for a while but given
the fact that wayland is just not there yet, making it impossible for
me to use it on my daily desktop, I've given up on that. I'll try to
get back to that later. But as far as I understood, unlike Windows RDP,
a wayland RDP framebuffer does not mirror a physical screen - it is just a
virtual framebuffer.

[1]: https://www.youtube.com/watch?v=RUXYuj9S1v8

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: {OT} Allow work from home?

2016-01-19 Thread Grant Edwards
On 2016-01-19, Mick  wrote:

> As far as I understand it RDP is different to VNC, in the sense that
> instead of sending every pixel down the line it only sends
> compressed semantic information *about* a desktop component
> (e.g. the start button, a control signal, etc.) and the client
> interprets this locally as a button or a control command. It is also
> using caching to minimise retransmission.

I don't think so.  AFAICT, RDP (a-la Windows) and VNC both do exactly
the same thing: they send display pixel info to be displayed.  They
try to optimize the process by only sending deltas and by using
various compression schemes, but they're both doing basically the
same thing.  RDP also has a bunch of other stuff to support things
like audio, printer, filesystem, and serial/parallel port redirection
that I don't think VNC ever had.  But the display/mouse/keyboard part
of it works pretty much the same.

-- 
Grant Edwards        grant.b.edwards        Yow! I hope something GOOD
                           at               came in the mail today so
                        gmail.com           I have a REASON to live!!




[gentoo-user] Re: {OT} Allow work from home?

2016-01-19 Thread Nikos Chantziaras

On 16/01/16 06:17, Grant wrote:

I'm considering allowing some employees to work from home but I'm
concerned about the security implications.  Currently everybody shows up
and logs into their locked down Gentoo system and from there is able to
access the company webapps which are restricted to the office IP
address.  I guess I would have to allow webapp access from any IP for
those users and trust that their computer is secure?  Should that not be
scary?


I've set up such systems using OpenVPN, as others have suggested.

One thing to look out for is to make sure that the setup only tunnels
traffic to your servers, not ALL traffic. Otherwise, all traffic from
your people is going to be tunneled through your network (Netflix, 
torrents, porn, everything else your people are doing at home.)
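
In OpenVPN terms that mostly means not pushing a default route. A server-side
sketch (tunnel subnet and webapp address are placeholders):

  # server.conf - push routes only for the company services, no default gateway
  server 10.8.0.0 255.255.255.0
  push "route 203.0.113.10 255.255.255.255"
  # deliberately NOT: push "redirect-gateway def1"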






[gentoo-user] Re: {OT} Allow work from home?

2016-01-16 Thread Grant Edwards
On 2016-01-16, Daniel Frey  wrote:

> I would use VPN + an X server that can spawn sessions on demand. This
> way it all stays internal on the work network.

One caveat: the way X11 was intended to work in this situation is that
you run the X11 clients on the secure machine in the office, and run
the X11 server on the remote machine in the worker's home.  But, in my
experience, it's been decades since remote X sessions could be used
for anything other than xterms and emacs.  All the "modern" GUI
toolkits (GTK, Qt, etc.) have been designed with the assumption that
the X11 server and client are co-resident on the same machine.  Even
the most trivial operations in those toolkits involve so many
round-trips between server and client that there's an intolerable
multi-second latency over a WAN connection (these days it barely works
through a 100M LAN).

It's a shame, because that used to be one of the big wins in the X11
architecture.

OTOH, there are other remote desktop options that work much better.

> I do something similar at work for our Windows clients, it was
> simple to set up there.
>
> I've set up my home server to act as a Windows-type terminal server
> using X and tigervnc.

OK, there you're running the X server and client on the same machine,
but the server is using VNC to display remotely.  That works.  Just
don't try to do it the "right" way -- the way X was intended to work.

> It actually works well, but I never got into multiuser and dealing
> with logon scripts and the like (you may or may not need this to
> deal with user documents and the like.)

--
Grant






Re: [gentoo-user] Re: {OT} Allow work from home?

2016-01-16 Thread Daniel Frey
On 01/16/2016 07:48 AM, Grant Edwards wrote:
>>
>> I've set up my home server to act as a Windows-type terminal server
>> using X and tigervnc.
> 
> OK, there you're running the X server and client on the same machine,
> but the server is using VNC to display remotely.  That works.  Just
> don't try to do it the "right" way -- the way X was intended to work.
> 

Yes, I was aware the "right" way wouldn't work for what I was trying to
do. To be honest, I never tested this over a VPN, I usually use it
internally when I'm moving big files around on the server. I used the
shell for the longest time, but when you are copying files that don't
easily fit into a wildcard pattern, it's just easier to click them in the
GUI and copy/move them. That was the whole reason I set it up. The nice
thing is that everything runs on the server on my local LAN this way,
the only thing needed is tigervnc (well, and a VPN setup) on the client.
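
The basic shape of that setup, for reference (display number, geometry and
host are arbitrary; tigervnc's vncserver reads the session to start from
~/.vnc/xstartup):

  # on the server: start a persistent virtual desktop on display :1
  vncserver :1 -geometry 1600x900 -depth 24

  # on the client (over the VPN): connect to that display
  vncviewer server.example.lan:1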

I've been running this setup for at least seven years (probably longer,
I don't remember when I set it up originally) now, with no major issues.
I actually just ran into one recently (like two weeks ago) - the new
version of tigervnc doesn't work in the manner I've set up with the
latest stable Xorg. Instead of troubleshooting, I just masked them and
everything is running normally.

I actually used a forum thread in the Docs, Tips, and Tricks forum[1] to
get it set up initially.


Dan

[1] https://forums.gentoo.org/viewtopic-t-72893-highlight-xvnc.html