Re: [pfSense] ZFS on 2.4.2

2018-03-06 Thread Walter Parker
On Mon, Mar 5, 2018 at 6:38 PM, Curtis Maurand  wrote:

> ZFS is a memory hog.  You need 1 GB of RAM for each TB of disk.


Curtis, can you provide some more details? I have been testing this for the
last couple of weeks and ZFS doesn't require 1G for each TB to function
(which is the standard meaning of "need").
From my direct testing and experience, 1G per TB is a rule of thumb for
suggested memory sizing on general-purpose servers. Do you have specific
information that violating this rule of thumb will cause functional issues?

To be more blunt, was this a case of drive-by nerd sniping or do you know
something that will cause my specific use case to fail at some point in
the future?
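
For anyone who wants to measure this themselves, FreeBSD exposes the ARC's
actual footprint via sysctl, which makes it easy to compare real usage
against the rule of thumb (both values are in bytes):

    sysctl kstat.zfs.misc.arcstats.size   # current ARC size
    sysctl vfs.zfs.arc_max                # configured ARC ceiling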


Walter



> On 3/1/2018 1:49 AM, Walter Parker wrote:
>
>> Forgot to CC the list.
>>
>> On Wed, Feb 28, 2018 at 10:13 PM, Walter Parker 
>> wrote:
>>
>> Thank you for the backup script.
>>>
>>> By my calculations, 2G should be enough. If I limit the ARC cache to
>>> 1G, that leaves 1G for applications & kernel memory. As I'm not
>>> serving the 6TB drive up as a file server, but using it for one
>>> specific task (to receive the backups from one host), I figure that I
>>> don't need lots of memory. ZFS needs lots of memory to be quick as a
>>> busy file server. I've seen testing showing ZFS doing fast file
>>> copies on as little as 768M of total system memory after proper
>>> tuning.
>>>
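Limiting the ARC as described is a single loader tunable on FreeBSD; the 1G
cap, expressed in bytes, goes in /boot/loader.conf.local:

    # Cap the ZFS ARC at 1 GiB (value is in bytes).
    vfs.zfs.arc_max="1073741824"
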
>>> I need ZFS because it is the only file system that can receive
>>> incremental ZFS snapshots and apply them. I have not set up the ZFS
>>> backup software yet, so I'm just using rsnapshot. The first time it
>>> ran, it filled all 1G of the cache. I rebooted the firewall
>>> afterwards and now ZFS uses 60-100M (the amount of data that rsync
>>> updates on a daily basis is pretty small). Right now, the data from
>>> the other server is ~8.8G, compressed to 1.7G with lz4.
>>>
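A sketch of the incremental send/receive being described, with hypothetical
pool and dataset names:

    # Snapshot the source, then send only the delta between two
    # snapshots to a dataset on the backup pool over ssh.
    zfs snapshot tank/data@today
    zfs send -i tank/data@yesterday tank/data@today | \
        ssh firewall zfs receive backup/data
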
>>> When I get the full backup running, it will be ~1.5TB in size. ZFS
>>> snapshots should be pretty small and quick (as ZFS can send just the
>>> data that was updated without having to walk the entire filesystem).
>>> An rsync backup would have to walk the whole system to find all of
>>> the changes. Most of the data on the system doesn't change (as it is
>>> a media library).
>>>
>>> I'll post back more results if people are interested, after I get the
>>> backup software working (I'm thinking about using ZnapZend).
>>>
>>>
>>> Walter
>>>
>>>
>>>
>>> On Wed, Feb 28, 2018 at 8:54 PM, ED Fochler  wrote:
>>>
I feel like I'm late in responding to this, but I have to say that 2GB of
RAM doesn't seem like nearly enough for a 6TB zfs volume.  ZFS is great in
a lot of ways, but is a RAM consuming monster.  For something RAM limited
like the 2220 I'd use a different, simpler file format.  Then I'd use
rsync based snapshots.

Here's my personal backup script.  :-)  I haven't tried it FROM pfsense,
but I've used it to back up pfsense.

 ED.

 On 2018, Feb 21, at 12:23 PM, Walter Parker  wrote:
>
> Hi,
>
> I have 2.4.2 installed on an SG-2220 from Netgate [nice box]. I just
> bought a 6TB powered USB drive from Costco and it works great (the
> drive has its own power supply and a USB hub). I want to use it to take
> ZFS backups from my home server.
>
> I edited /boot/loader.conf.local and /etc/rc.conf.local to load ZFS on
> boot and created a pool and a file system. That worked, but the memory
> ran low, so I restricted the ARC cache to 1G to keep a bit more memory
> free and rebooted. When the system rebooted it did not remount the pool
> (and therefore the file system) because the pool was marked as in use
> by another system (itself). That means that the pool was not properly
> exported/unmounted at shutdown.
>
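The stock FreeBSD knobs for this are one line in each file (the same .local
override files being edited above):

    # /boot/loader.conf.local: load the ZFS kernel module at boot
    zfs_load="YES"

    # /etc/rc.conf.local: mount ZFS datasets at startup
    zfs_enable="YES"
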
> Taking a quick look at rc.shutdown, I notice that it calls a customized
> pfsense shutdown script at the beginning and then exits. Is there a
> good place in the configuration where I can put/call the proper zfs
> shutdown script so that the pool is properly stopped/exported so that
> it imports correctly on boot?
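
Absent a proper shutdown hook, the manual equivalent is to export the pool
before powering off, which clears the "in use by another system" flag on
the next import (pool name here is hypothetical):

    # Unmount all datasets and mark the pool cleanly exported.
    zpool export backup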
>
>
> Walter
>
> --
> The greatest dangers to liberty lurk in insidious encroachment by men
> of
> zeal, well-meaning but without understanding.   -- Justice Louis D.
>
 Brandeis




>>> --
>>> The greatest dangers to liberty lurk in insidious encroachment by men of
>>> zeal, well-meaning but without understanding.   -- Justice Louis D.
>>> Brandeis
>>>
>>>
>>
>>
> --
> Best Regards
> Curtis Maurand
> Princip

Re: [pfSense] ZFS on 2.4.2

2018-03-06 Thread Paul Mather
On Mar 6, 2018, at 12:39 PM, Walter Parker  wrote:

> On Mon, Mar 5, 2018 at 6:38 PM, Curtis Maurand  wrote:
> 
>> ZFS is a memory hog.  You need 1 GB of RAM for each TB of disk.
> 
> 
> Curtis, can you provide some more details? I have been testing this for
> the last couple of weeks and ZFS doesn't require 1G for each TB to
> function (which is the standard meaning of "need").
> From my direct testing and experience, 1G per TB is a rule of thumb for
> suggested memory sizing on general-purpose servers. Do you have specific
> information that violating this rule of thumb will cause functional
> issues?
> 
> To be more blunt, was this a case of drive-by nerd sniping or do you know
> something that will cause my specific use case to fail at some point in the
> future?


The "1G for each TB" sounds like the rule of thumb for when you plan to enable 
deduplication on a dataset.  ZFS deduplication can be a disastrous memory hog 
(or else completely ruin your performance if you don't have sufficient ARC 
memory/resources), which is why many people do not enable it unless they've 
made a serious conscious decision to do so.
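
For anyone sizing this: the dedup table's memory demand can be estimated
before enabling dedup. A sketch, with a hypothetical pool name (zdb's -S
mode simulates deduplication on existing data):

    # Build a simulated dedup table for pool "tank" and print its
    # histogram and would-be dedup ratio, without enabling dedup.
    zdb -S tank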

I ran ZFS on a 1--2 GB RAM FreeBSD/i386 system for years and it was stable.
I had to tune KVM and restrict ARC RAM consumption, but once I did that I
had no problems.  It's my experience that ZFS is more stable and tested on
FreeBSD/amd64.
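
For reference, that sort of tuning lives in /boot/loader.conf. A sketch with
illustrative small-memory i386-era values (the exact numbers depend on the
machine, so treat these as placeholders):

    # Cap kernel memory and the ZFS ARC on a low-RAM FreeBSD box.
    vm.kmem_size="512M"
    vm.kmem_size_max="512M"
    vfs.zfs.arc_max="160M"

On modern amd64 systems the kmem settings are generally unnecessary;
capping vfs.zfs.arc_max is usually the only knob that matters.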

Cheers,

Paul.


> 
> 
> Walter
> 
> 
> 
>> On 3/1/2018 1:49 AM, Walter Parker wrote:
>> 
>>> Forgot to CC the list.
>>> 
>>> On Wed, Feb 28, 2018 at 10:13 PM, Walter Parker 
>>> wrote:
>>> 
>>> Thank you for the backup script.
 
 By my calculations, 2G should be enough. If I limit the ARC cache to
 1G, that leaves 1G for applications & kernel memory. As I'm not serving
 the 6TB drive up as a file server, but using it for one specific task
 (to receive the backups from one host), I figure that I don't need lots
 of memory. ZFS needs lots of memory to be quick as a busy file server.
 I've seen testing showing ZFS doing fast file copies on as little as
 768M of total system memory after proper tuning.
 
 I need ZFS because it is the only file system that can receive
 incremental ZFS snapshots and apply them. I have not set up the ZFS
 backup software yet, so I'm just using rsnapshot. The first time it
 ran, it filled all 1G of the cache. I rebooted the firewall afterwards
 and now ZFS uses 60-100M (the amount of data that rsync updates on a
 daily basis is pretty small). Right now, the data from the other server
 is ~8.8G, compressed to 1.7G with lz4.
 
 When I get the full backup running, it will be ~1.5TB in size. ZFS
 snapshots should be pretty small and quick (as ZFS can send just the
 data that was updated without having to walk the entire filesystem). An
 rsync backup would have to walk the whole system to find all of the
 changes. Most of the data on the system doesn't change (as it is a
 media library).
 
 I'll post back more results if people are interested, after I get the
 backup software working (I'm thinking about using ZnapZend).
 
 
 Walter
 
 
 
 On Wed, Feb 28, 2018 at 8:54 PM, ED Fochler  wrote:
 
> I feel like I'm late in responding to this, but I have to say that 2GB
> of RAM doesn't seem like nearly enough for a 6TB zfs volume.  ZFS is
> great in a lot of ways, but is a RAM consuming monster.  For something
> RAM limited like the 2220 I'd use a different, simpler file format.
> Then I'd use rsync based snapshots.
> 
> Here's my personal backup script.  :-)  I haven't tried it FROM pfsense,
> but I've used it to back up pfsense.
> 
> ED.
> 
> 
> 
> 
> 
> On 2018, Feb 21, at 12:23 PM, Walter Parker  wrote:
>> 
>> Hi,
>> 
>> I have 2.4.2 installed on an SG-2220 from Netgate [nice box]. I just
>> bought a 6TB powered USB drive from Costco and it works great (the
>> drive has its own power supply and a USB hub). I want to use it to
>> take ZFS backups from my home server.
>> 
>> I edited /boot/loader.conf.local and /etc/rc.conf.local to load ZFS
>> on boot and created a pool and a file system. That worked, but the
>> memory ran low, so I restricted the ARC cache to 1G to keep a bit
>> more memory free and rebooted. When the system rebooted it did not
>> remount the pool (and therefore the file system) because the pool was
>> marked as in use by another system (itself). That means that the pool
>> was not properly exported/unmounted at shutdown.
>> 
>> Taking a quick look at rc.shutdown, I notice that it calls a
>> customized pfsense shutdown script at the beginning and then exits.
>> Is there a good

Re: [pfSense] ZFS on 2.4.2

2018-03-06 Thread Peder Rovelstad
Here's a ZFS tuning guide if you have not seen it:
https://wiki.freebsd.org/ZFSTuningGuide

But it only goes up to v9.

Down the page they reference 2-5GB/TB for dedupe; at that rate even a 3TB
volume implies 6-15GB of RAM for the dedup table alone. Free advice, worth
every penny paid!

https://www.freebsd.org/doc/en/books/faq/all-about-zfs.html

My NAS4Free server uses 90% of its 4GB RAM for a 3TB volume, configured
with 1.75GB arc_max.

-----Original Message-----
From: List [mailto:list-boun...@lists.pfsense.org] On Behalf Of Paul Mather
Sent: Tuesday, March 6, 2018 12:09 PM
To: pfSense Support and Discussion Mailing List 
Subject: Re: [pfSense] ZFS on 2.4.2

On Mar 6, 2018, at 12:39 PM, Walter Parker <walt...@gmail.com> wrote:

> On Mon, Mar 5, 2018 at 6:38 PM, Curtis Maurand <cmaur...@xyonet.com> wrote:

> 
>> ZFS is a memory hog.  You need 1 GB of RAM for each TB of disk.
> 
> Curtis, can you provide some more details? I have been testing this for
> the last couple of weeks and ZFS doesn't require 1G for each TB to
> function (which is the standard meaning of "need").
> From my direct testing and experience, 1G per TB is a rule of thumb for
> suggested memory sizing on general-purpose servers. Do you have specific
> information that violating this rule of thumb will cause functional
> issues?
> 
> To be more blunt, was this a case of drive-by nerd sniping or do you
> know something that will cause my specific use case to fail at some
> point in the future?

The "1G for each TB" sounds like the rule of thumb for when you plan to
enable deduplication on a dataset.  ZFS deduplication can be a disastrous
memory hog (or else completely ruin your performance if you don't have
sufficient ARC memory/resources), which is why many people do not enable it
unless they've made a serious conscious decision to do so.

I ran ZFS on a 1--2 GB RAM FreeBSD/i386 system for years and it was stable.
I had to tune KVM and restrict ARC RAM consumption, but once I did that I
had no problems.  It's my experience that ZFS is more stable and tested on
FreeBSD/amd64.

Cheers,

Paul.

> 
> 
> Walter
> 
>> On 3/1/2018 1:49 AM, Walter Parker wrote:
>> 
>>> Forgot to CC the list.
>>> 
>>> On Wed, Feb 28, 2018 at 10:13 PM, Walter Parker <walt...@gmail.com>
>>> wrote:
>>> 
>>> Thank you for the backup script.

 By my calculations, 2G should be enough. If I limit the ARC cache to
 1G, that leaves 1G for applications & kernel memory. As I'm not serving
 the 6TB drive up as a file server, but using it for one specific task
 (to receive the backups from one host), I figure that I don't need lots
 of memory. ZFS needs lots of memory to be quick as a busy file server.
 I've seen testing showing ZFS doing fast file copies on as little as
 768M of total system memory after proper tuning.
 
 I need ZFS because it is the only file system that can receive
 incremental ZFS snapshots and apply them. I have not set up the ZFS
 backup software yet, so I'm just using rsnapshot. The first time it
 ran, it filled all 1G of the cache. I rebooted the firewall afterwards
 and now ZFS uses 60-100M (the amount of data that rsync updates on a
 daily basis is pretty small). Right now, the data from the other server
 is ~8.8G, compressed to 1.7G with lz4.
 
 When I get the full backup running, it will be ~1.5TB in size. ZFS
 snapshots should be pretty small and quick (as ZFS can send just the
 data that was updated without having to walk the entire filesystem). An
 rsync backup would have to walk the whole system to find all of the
 changes. Most of the data on the system doesn't change (as it is a
 media library).
 
 I'll post back more results if people are interested, after I get the
 backup software working (I'm thinking about using ZnapZend).
 
 Walter

 On Wed, Feb 28, 2018 at 8:54 PM, ED Fochler <soek...@liquidbinary.com>
 wrote:
 
> I feel like I'm late in responding to this, but I have to say that 2GB
> of RAM doesn't seem like nearly enough for a 6TB zfs volume.  ZFS is
> great in a lot of ways, but is a RAM consuming monster.  For something
> RAM limited like the 2220 I'd use a different, simpler file format.
> Then I'd use rsync based snapshots.
> 
> Here's my personal backup script.  :-)  I haven't tried it FROM
> pfsense, but I've used it to back up pfsense.
> 
> ED.
> 
> On 2018, Feb 21, at 12:23 PM, Walter Parker <walt...@gmail.com> wrote:
>> 
>> Hi,
>> 
>> I have 2.4.2 installed on an SG-2220 from Netgate [nice box]. I just
>> bought
>>