Asus Sabertooth Z77 and FreeBSD?

2013-06-26 Thread Dan Naumov
Hello list

Does anyone here have any experience with the Asus Sabertooth Z77
motherboard? How well does it work with FreeBSD and is all the hardware
supported (including both SATA controllers)?

Thanks!

- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


using freebsd-update to update jails and their host

2011-02-27 Thread Dan Naumov
I have an 8.0 host system with a few jails (using ezjail) that I am gearing
up to update to 8.2. I have used freebsd-update a few times in the past to
upgrade a system between releases, but how would I go about using it to
also upgrade a few jails made with ezjail? I would obviously need to point
freebsd-update at /basejail as the root, which I assume isn't too hard, but
what about having it merge the new/changed /etc files in the individual jails?

I've also discovered the ezjail-admin install -h file:// option, which
installs a basejail using the host system as the source. Am I right in
thinking I could also use this by first upgrading my host and then running
that command to overwrite /basejail with the updated files from the host,
bringing them into sync? I still don't know how I would then fix the /etc
under each individual jail, though.
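
For what it's worth, freebsd-update(8) takes a -b basedir flag, so a hedged sketch (assuming the default ezjail layout under /usr/jails, with the host already upgraded) might look like this:

```shell
# Sketch only -- verify paths against your ezjail setup before running.
# Point freebsd-update at the basejail instead of /:
freebsd-update -b /usr/jails/basejail -r 8.2-RELEASE upgrade
freebsd-update -b /usr/jails/basejail install

# Each jail's private /etc still needs merging separately, e.g. with
# mergemaster run against every jail root in turn:
for j in /usr/jails/*; do
    [ "$j" = "/usr/jails/basejail" ] && continue
    [ "$j" = "/usr/jails/flavours" ] && continue
    mergemaster -U -D "$j"
done
```

Whether freebsd-update's upgrade logic behaves sensibly against a jail tree (which contains no kernel) is exactly the open question here, so treat this as a starting point rather than a recipe.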


- Sincerely,
Dan Naumov


How long do you go without upgrading FreeBSD to a newer release?

2010-05-16 Thread Dan Naumov
Hello folks

Just a thought/question that has recently come to my mind: How long do
you usually wait until upgrading to a newer release of FreeBSD? I am
sure there are lots of people who upgrade straight away, but what
about the opposite? What's your oldest currently running installation?
Do you have any issues, and are you planning an upgrade, or do you
intend to leave it running as-is until some critical piece of hardware
breaks down and requires a replacement?

The reason I am asking: I have an 8.0 installation that I am VERY
happy with. It runs like clockwork: everything is properly configured
and highly locked down, and all services accessible to the outside world
run inside ezjail-managed jails on top of ZFS, so it's also trivial to
restore jails from snapshots, should the need ever arise. I don't really
see myself NEEDING to upgrade for many years, even long after security
updates stop being made for 8.0, since I can see myself working around
arising security issues with my configuration. To break into the real
host OS and cause real damage, you would have to be either really,
really dedicated, have a gun and know where I live, or serve me with a
warrant.

Do you live by the "if it's not broken, don't fix it" mantra, or do you
religiously keep your OS installations up to date?


- Sincerely,
Dan Naumov


RE: ZFS scheduling

2010-04-25 Thread Dan Naumov
Hi,

I noticed that my system gets very slow when I'm doing some simple but
intense ZFS operations. For example, I move about 20 gigabytes of data
from one dataset to another on the same pool, which is a RAIDZ of 3 500
GB SATA disks. The operation itself runs fast, but meanwhile other
things get really slow, e.g. opening an application takes 5 times as
long as before. Simple operations like 'ls' also stall for several
seconds, which they never did before. It already changed a lot when I
switched from RAIDZ to a mirror with only 2 disks. Memory and CPU don't
seem to be the issue; I have a quad-core CPU and 8 GB RAM.

I can't get rid of the idea that this has something to do with
scheduling. The system is absolutely stable and fast. Somehow, small
I/O operations on ZFS seem to have a hard time making it through when
other, bigger ones are running. Maybe this has something to do with
tuning?

I know my system information is very incomplete, and there could be a
lot of causes. But does anybody know if this could be an issue with ZFS
itself?

Hello

As you mention yourself, your system information is indeed very
incomplete, making your problem rather hard to diagnose :)

Scheduling, in the traditional sense, is unlikely to be the cause of
your problems, but here are a few things you could look into:

The first is obviously the pool layout: heavy-duty writing to a pool
consisting of a single raidz vdev is slow (slower than writing to a
mirror, as you already discovered), period; such is the nature of
raidz. Additionally, your problem is magnified by the fact that you
have reads competing with writes, since you are (I assume) reading from
the same pool. One approach to alleviating the problem would be to
use a pool consisting of 2 or more raidz vdevs in a stripe, like
this:

pool
  raidz
disc1
disc2
disc3
  raidz
disc4
disc5
disc6
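
Such a layout falls out of simply listing two raidz groups in a single create command; a sketch with placeholder device names:

```shell
# Two 3-disk raidz vdevs in one pool; ZFS stripes writes across the vdevs
zpool create pool \
    raidz disc1 disc2 disc3 \
    raidz disc4 disc5 disc6
```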

The second potential cause of your issues is the system wrongly
guesstimating your optimal TXG commit size. ZFS commits data to disk in
chunks, and it tries to optimize how big a chunk it writes at a time by
evaluating your pool's IO bandwidth over time and the available RAM.
The TXG commits happen at an interval of 5-30 seconds. The worst-case
scenario is this: if the system misguesses the optimal TXG size, then
under heavy write load it keeps deferring the commit for up to the
30-second timeout, and when it hits the cap, it frantically commits it
ALL at once. This can, and most likely will, completely starve your
read IO on the pool for as long as the drives choke while committing
the TXG.

If you are on 8.0-RELEASE, you could try playing with the
vfs.zfs.txg.timeout= variable in /boot/loader.conf; generally sane
values are 5-30, with 30 being the default. You could also try
adjusting vfs.zfs.vdev.max_pending= down from the default of 35 and
see if that helps. AFAIK, 8-STABLE and -HEAD have a sysctl variable
which directly allows you to manually set the preferred TXG size, and
I'm pretty sure I've seen some patches on the mailing lists to add this
functionality to 8.0.
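
As a concrete starting point, those tunables go into /boot/loader.conf like this (the values are examples to experiment with, not recommendations):

```
# /boot/loader.conf -- takes effect on reboot; defaults are 30 and 35
vfs.zfs.txg.timeout="5"
vfs.zfs.vdev.max_pending="10"
```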

Hope this helps.


- Sincerely,
Dan Naumov


Re: version/revision control software for things mostly not source

2010-04-18 Thread Dan Naumov
On Sun, Apr 18, 2010 at 4:10 AM, Gene f...@brightstar.bomgardner.net wrote:
 On Sat, 17 Apr 2010 18:08:49 +0300, Dan Naumov wrote
 I think I am reaching the point where I want to have some kind of
 sane and easy to use version/revision control software for my
 various personal files and small projects. We are talking about
 varied kind of data, ranging from binary format game data (I have
 been doing FPS level design as a hobby for over a decade) to .doc
 office documents to ASCI text formatted game data. Most of the data
 is not plaintext. So far I have been using a hacked together mix of
 things, mostly a combination of essentially storing each revision of
 any given file a separate file001, file002, file003, etc which while
 easy to use and understand, seems rather space-inefficient and a
 little bit of ZFS snapshotting, however I want something better.


 Sadly, FreeBSD's ZFS doesn't have dedup or this functionality
 would've been easy to implement with my current hacked together methods.
 Performance does't matter all that much (unless we are talking
 something silly like a really crazy IO bottleneck), since the only
 expected user is just me and perhaps a few friends.

 Thanks!

 - Sincerely,
 Dan Naumov

 Someone else mentioned Subversion and Tortoisesvn. I use these tools for
 revision management of 600 or so powerpoints, graphics, and other
 miscellaneous files that we use for church services. Once up and running, it's
 simplicity itself. I also use websvn to allow read only access to individual
 files via a browser. I've found it works like a charm.


 ---
 IHN,
 Gene

I've looked at SVN and it looks reasonably easy to grok, but reading
the Version Control with Subversion book... it seems there is no
actual way to truly erase/delete/destroy/purge a part of an existing
repository? This sounds rather weird and annoying. What if I decide
that project XYZ is beyond redemption and abandon it? I can delete the
working copy, but all its history is still in there, gigabytes upon
gigabytes of data. With no way to remove it, that sounds like a really
big limitation.
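
For completeness, Subversion's one escape hatch here is dump-and-filter: rebuild the repository without the unwanted path. A sketch with made-up repository paths:

```shell
# Dump everything, drop /projectXYZ and its whole history, reload
svnadmin dump /repos/old | svndumpfilter exclude /projectXYZ > filtered.dump
svnadmin create /repos/new
svnadmin load /repos/new < filtered.dump
```

It is offline surgery rather than a normal operation, but it does reclaim the space.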


- Sincerely,
Dan Naumov


version/revision control software for things mostly not source

2010-04-17 Thread Dan Naumov
I think I am reaching the point where I want some kind of sane and
easy-to-use version/revision control software for my various personal
files and small projects. We are talking about varied kinds of data,
ranging from binary-format game data (I have been doing FPS level
design as a hobby for over a decade) to .doc office documents to
ASCII-formatted game data. Most of the data is not plaintext. So far I
have been using a hacked-together mix of things, mostly a combination
of storing each revision of any given file as a separate file001,
file002, file003, etc. (which, while easy to use and understand, seems
rather space-inefficient) and a little bit of ZFS snapshotting.
However, I want something better.

What would be examples of good version control software for me? The
major things I want are: a simple and easy-to-use Windows GUI client
for my workstation, so I can quickly browse through different projects,
go back to any given point in time, and view/check out the data from
that point on a Windows machine. Space efficiency, while not critical
(the server has 2 x 2TB drives in RAID1 and can easily be expanded down
the line should the need arise), is obviously an important thing to
have; surely even with binary data some space can be saved when you
have 20 versions of the same file with minor changes.

Sadly, FreeBSD's ZFS doesn't have dedup, or this functionality would've
been easy to implement with my current hacked-together methods.
Performance doesn't matter all that much (unless we are talking about
something silly like a really crazy IO bottleneck), since the only
expected user is just me and perhaps a few friends.

Thanks!

- Sincerely,
Dan Naumov


RE: boot loader too large

2010-04-17 Thread Dan Naumov
Hey

A 64kb freebsd-boot partition should be more than plenty for what you
want to do, see my setup at: http://freebsd.pastebin.com/QS6MnNKc

If you want to set up a ZFS boot/root configuration and make your life
easier, just use the installation script provided by the guy who wrote
ManageBE:
http://anonsvn.h3q.com/projects/freebsd-patches/browser/manageBE/create-zfsboot-gpt_livecd.sh

- Sincerely,
Dan Naumov


Nethogs or similar for FreeBSD?

2010-04-10 Thread Dan Naumov
Is there something like Nethogs for FreeBSD that would show bandwidth
use broken down by PID, user and so on, in a convenient, easy-to-view
fashion?

http://nethogs.sourceforge.net/nethogs.png


- Sincerely,
Dan Naumov


Re: Nethogs or similar for FreeBSD?

2010-04-10 Thread Dan Naumov
On Sat, Apr 10, 2010 at 10:44 PM, mikel king mikel.k...@olivent.com wrote:

 On Apr 10, 2010, at 3:14 PM, Dan Naumov wrote:

 Is there something like Nethogs for FreeBSD that would show bandwidth
 use broken down by PID, user and such in a convinient easy to view and
 understand fashion?

 http://nethogs.sourceforge.net/nethogs.png


 - Sincerely,
 Dan Naumov

 Looks interesting, and pretty light weight only requiring ncurses and
 libpcap. I wonder how hard it'd be to compile it and get it running.
 What version of FreeBSD are you running?
 Cheers,
 Mikel King

I am on 8.0/amd64. From the Nethogs website: "Since NetHogs heavily
relies on /proc, it currently runs on Linux only."


- Sincerely,
Dan Naumov


bandwidth throttling?

2010-04-08 Thread Dan Naumov
Hello folks

I have a 8.0 system that has 2 IPs:

ifconfig em1
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4>
ether 00:25:90:01:32:93
inet 192.168.1.126 netmask 0xff00 broadcast 192.168.1.255
inet 192.168.1.127 netmask 0xff00 broadcast 192.168.1.255
media: Ethernet autoselect (1000baseT full-duplex)
status: active

The .126 is used by the host for various obvious things, and I have a
jail on the same machine running off the .127 IP. Is there a quick and
easy way to have the jail host throttle the bandwidth usage of
everything going to and from the .127 jail? I don't really need
anything fancy; I just want to set hard limits for the entire jail
globally, like "don't use more than 500KB/s downstream and more than
150KB/s upstream". What would be the best way of doing this? My
understanding is that to do this with PF I would need ALTQ, meaning I
have to use a custom kernel, while IPFW with dummynet should have
similar functionality but should also work with GENERIC?
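
A hedged sketch of the IPFW/dummynet route (rule numbers and the trailing allow are illustrative; ipfw's default policy is deny, so don't lock yourself out):

```shell
# Load ipfw and dummynet as modules -- no custom kernel required
kldload ipfw
kldload dummynet

# One pipe per direction, sized to the desired hard caps
ipfw pipe 1 config bw 500KBytes/s   # traffic *to* the jail (downstream)
ipfw pipe 2 config bw 150KBytes/s   # traffic *from* the jail (upstream)

ipfw add 100 pipe 1 ip from any to 192.168.1.127
ipfw add 200 pipe 2 ip from 192.168.1.127 to any
ipfw add 65000 allow ip from any to any   # keep all other traffic flowing
```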

Thanks!

- Sincerely,
Dan Naumov


Re: bizarre mount_nullfs issue with jails / ezjail

2010-04-07 Thread Dan Naumov
 An additional question: how come sade and sysinstall which are run
 inside the jail can see (and I can only assume they can also operate
 on and damage) the real underlying disks of the host?


 Disks (as well as others you have in your host's /dev) aren't visible
 inside jails.

Well, somehow they are on my system.

I guess I should've also clarified that the jail was installed using
ezjail and not completely manually.

From /usr/local/etc/ezjail/semipublic

export jail_semipublic_devfs_enable=YES
export jail_semipublic_devfs_ruleset=devfsrules_jail

- Sincerely,
Dan Naumov


Re: bizarre mount_nullfs issue with jails / ezjail

2010-04-07 Thread Dan Naumov
On Wed, Apr 7, 2010 at 10:10 AM, Aiza aiz...@comclark.com wrote:
 Dan Naumov wrote:

 An additional question: how come sade and sysinstall which are run
 inside the jail can see (and I can only assume they can also operate
 on and damage) the real underlying disks of the host?

 Disks (as well as others you have in your host's /dev) aren't visible
 inside jails.

 Well, somehow they are on my system.

 I guess I should've also clarified that the jail was installed using
 ezjail and not completely manually

 From /usr/local/etc/ezjail/semipublic

 export jail_semipublic_devfs_enable=YES
 export jail_semipublic_devfs_ruleset=devfsrules_jail

 - Sincerely,
 Dan Naumov


 You are not in a jail but as the host. Use ezjail-admin console jailname and
 things will look alot different. What you are playing with are ezjails
 system control files.

No, I am not. I am running sade / sysinstall INSIDE THE JAIL (after
"ezjail-admin console jailname", or after connecting to the jail via
ssh).


- Sincerely,
Dan Naumov


RE: Preventing Bad SMB Mount From Stalling A Boot

2010-04-06 Thread Dan Naumov
I mount my SMB shares from /etc/fstab on a FBSD 8.x production machine like
this:

//USER@WINSERVER/SHARE   /mountpoint   smbfs   rw   0   0

The problem is that after an outage, WINSERVER doesn't come up
before the FBSD machine. So, the FBSD machine tries to boot and then
hangs permanently because it cannot get the SMB share points mounted.
This recently happened after a catastrophic power outage that cooked
the share info on WINSERVER. Even after it came up, it was no longer
serving the proper shares and the FBSD machine could never find the
SMB shares and thus hung permanently.

The SMB mounts are not essential for system operation. Is there a
way to tell the FBSD machine to try to mount the SMB shares, but keep
going and complete the boot if it cannot?

A bit of an ugly hack, but have you considered attempting to mount the
share via an automatic script after the system has finished booting?
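
One hedged way to do exactly that: keep the share out of /etc/fstab entirely and mount it from a background retry loop started at the end of boot (e.g. from /etc/rc.local), so a dead WINSERVER can never stall anything. A minimal sketch, with the share and mountpoint as placeholders; newer FreeBSD releases also understand a failok fstab option, if available on your version:

```shell
#!/bin/sh
# retry <attempts> <delay> <command...> -- run a command until it
# succeeds, up to a limit; returns the final status
retry() {
    attempts=$1; delay=$2; shift 2
    i=0
    while [ "$i" -lt "$attempts" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# From rc.local, detached so the boot always completes (placeholders):
# retry 20 30 mount_smbfs -N //USER@WINSERVER/SHARE /mountpoint &
```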

- Sincerely,
Dan Naumov


bizarre mount_nullfs issue with jails / ezjail

2010-04-06 Thread Dan Naumov
So, I want the basejail to contain only the world, and to link the
ports tree from the host into each individual jail when it's time to
update the ports inside them. But I am running into a bit of a bizarre
issue: I can mount_nullfs /usr/ports elsewhere on the host just fine,
but it doesn't work if I try to mount_nullfs it to /usr/ports inside
the jail:

mount_nullfs /usr/ports/ /usr/ports2

df -H | grep ports
cerberus/usr-ports              34G   241M   34G    1%   /usr/ports
cerberus/usr-ports-distfiles    34G     0B   34G    0%   /usr/ports/distfiles
cerberus/usr-ports-packages     34G     0B   34G    0%   /usr/ports/packages
/usr/ports                      34G   241M   34G    1%   /usr/ports2

mount | grep ports
cerberus/usr-ports on /usr/ports (zfs, local)
cerberus/usr-ports-distfiles on /usr/ports/distfiles (zfs, local)
cerberus/usr-ports-packages on /usr/ports/packages (zfs, local)
/usr/ports on /usr/ports2 (nullfs, local)

mount_nullfs /usr/ports/ /usr/jails/semipublic/usr/ports
mount_nullfs: /basejail: No such file or directory

What is going on here? I also note that the error actually wants a
/basejail on the host, which is even more bizarre:

mount_nullfs /usr/ports/ /usr/jails/semipublic/usr/ports
mount_nullfs: /basejail: No such file or directory

mkdir /basejail

mount_nullfs /usr/ports/ /usr/jails/semipublic/usr/ports
mount_nullfs: /basejail/usr: No such file or directory

Yet, this works:

mkdir /usr/jails/semipublic/test
mount_nullfs /usr/ports/ /usr/jails/semipublic/test
umount /usr/jails/semipublic/test

Any ideas?


- Sincerely,
Dan Naumov


Re: bizarre mount_nullfs issue with jails / ezjail

2010-04-06 Thread Dan Naumov
On Wed, Apr 7, 2010 at 12:37 AM, Glen Barber glen.j.bar...@gmail.com wrote:
 Hi Dan,

 Dan Naumov wrote:
 So, I want the basejail to only contain the world and link the ports
 tree from the host into each individual jail when it's time to update
 the ports inside them, but I am running into a bit of a bizarre issue:
 I can mount_nullfs /usr/ports elsewhere on the host just fine, but it
 doesn't work if I try to mount_nullfs it to /usr/ports inside the
 jail:

 mount_nullfs /usr/ports/ /usr/ports2

 df -H | grep ports
 cerberus/usr-ports                34G    241M     34G     1%    /usr/ports
 cerberus/usr-ports-distfiles      34G      0B     34G     0%
 /usr/ports/distfiles
 cerberus/usr-ports-packages       34G      0B     34G     0%
 /usr/ports/packages
 /usr/ports                        34G    241M     34G     1%    /usr/ports2

 mount | grep ports
 cerberus/usr-ports on /usr/ports (zfs, local)
 cerberus/usr-ports-distfiles on /usr/ports/distfiles (zfs, local)
 cerberus/usr-ports-packages on /usr/ports/packages (zfs, local)
 /usr/ports on /usr/ports2 (nullfs, local)

 mount_nullfs /usr/ports/ /usr/jails/semipublic/usr/ports
 mount_nullfs: /basejail: No such file or directory

 What is going on here? I also note that the error actually wants a
 /basejail on the host, which is even more bizarre:

 mount_nullfs /usr/ports/ /usr/jails/semipublic/usr/ports
 mount_nullfs: /basejail: No such file or directory

 mkdir /basejail

 mount_nullfs /usr/ports/ /usr/jails/semipublic/usr/ports
 mount_nullfs: /basejail/usr: No such file or directory

 Yet, this works:

 mkdir /usr/jails/semipublic/test
 mount_nullfs /usr/ports/ /usr/jails/semipublic/test
 umount /usr/jails/semipublic/test

 Any ideas?



 The ports directory in an ezjail is a link to /basejail/usr/ports (in the
 jail).

 Breaking the link (from the host) allows the mount to work successfully.

 orion# ll usr/ports
 lrwxr-xr-x  1 root  wheel  19 Mar  8 18:06 usr/ports -> /basejail/usr/ports
 orion# unlink usr/ports
 orion# mkdir usr/ports
 orion# mount_nullfs /usr/ports usr/ports
 orion#

 Regards,

 --
 Glen Barber

Thanks for the tip.

An additional question: how come sade and sysinstall, which are run
inside the jail, can see (and, I can only assume, can also operate on
and damage) the real underlying disks of the host?

- Sincerely
Dan Naumov


Re: Intel D945GSE vs Zotac ION ITX (was: Support for Zotac MB with nVidia ION chipset)

2010-04-05 Thread Dan Naumov
On Mon, Apr 5, 2010 at 1:20 PM, Jeremie Le Hen jere...@le-hen.org wrote:
 Nonetheless I'm a little worried by what you said about the lack of ECC.
 Computers has been used for years before ECC came out and obviously they
 worked :).  Do you really think it might happen to be a problem?  Would
 an Intel board would compensate for this?  Dan, have you ever
 experienced weird problems that could be explained by bitflips?

Personally, I haven't had any issues, but then again, on the ZFS scale
of things both my current pool size (2 TB) and my projected pool size
after adding more disks (6 TB) are pretty small. If this were a heavily
used machine with a 10 TB pool or bigger, I would definitely give
strong consideration to ECC.

 For the records, I've found an interesting and very recent post about
 someone running OpenSolaris on this Supermicro motherboard [1].  He uses
 a thumbdrive for the operating system and with four drives connected
 onto it, the whole system sucks 41 watts when idle (27 without any HDD,
 which is twice as the Intel D945GSE

The power draw (from the wall) for the Supermicro X7SPA-H without any
disks attached is as follows:

26W - during boot
24W - idle at console
28W - full load

This is with an 80+ rated Corsair 400CX PSU. Sadly, I did not have the
opportunity to measure the power draw with powerd enabled. The D945GSE
is unsuitable for use as a ZFS NAS due to its severe feature
limitations compared to the X7SPA-H: the biggest concern is the RAM
limit, followed by the number of native SATA ports, followed by the
fact that you only get a PCI-E x1 slot (both physical form factor and
speed-wise) for expansion, while most controller cards are either x4 or
x8, meaning they simply wouldn't physically fit into the slot.

Single-core 1.6GHz Diamondville Atom vs dual-core 1.66GHz Pineview Atom
1 RAM socket supporting a max of 1GB vs 2 RAM sockets supporting a max
of 4GB (note that the X7SPA-H uses SO-DIMMs, not regular DIMMs)
2 SATA ports vs 6 SATA ports
1 Realtek NIC vs 2 x Intel NICs
PCI-E x1 slot vs PCI-E x4 slot (in x16 form factor) for expansion


- Sincerely,
Dan Naumov


RE: Intel D945GSE vs Zotac ION ITX (was: Support for Zotac MB with nVidia ION chipset)

2010-04-04 Thread Dan Naumov
Just a small comment regarding Atom suitability for a home NAS: feel
free to completely ignore people saying that ZFS overhead is too much
for an Atom to handle efficiently; they have no idea what they are
talking about. I am using a Supermicro X7SPA-H board (Atom D510) and I
am easily achieving ~85mb/s transfers over Samba to and from the
machine. 85mb/s is also the best these drives will do, and my CPU is
nowhere near maxed out during these transfers, so with better disks I
would easily be saturating gigabit while still having plenty of
available CPU time. What you want is a good disk controller and fast,
reliable disks. 2gb of RAM is enough, but with 4gb you can safely
enable prefetch for a very noticeable boost in sequential reads. Below
are some numbers from my personal Atom NAS system:

===
bonnie -s 8192

              ---Sequential Output---- ---Sequential Input--- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec  %CPU  /sec %CPU
         8192 29065 68.9 52027 39.8 39636 33.3 54057 95.4 105335 34.6 174.1  7.9

dd if=/dev/zero of=test1 bs=1M count=8192
8589934592 bytes transferred in 111.300481 secs (77177875 bytes/sec) (73.6mb/s)

dd if=/dev/urandom of=test2 bs=1M count=8192
dd if=test2 of=/dev/zero bs=1M
8589934592 bytes transferred in 76.031399 secs (112978779 bytes/sec)
(107.74mb/s)
===

This is a ZFS mirror of 2 x 2tb WD Green drives with 32mb cache, with
the automatic head-parking disabled via WDIDLE3. The drives are very
cheap and hence are the bottleneck in my case.


- Sincerely,
Dan Naumov


tuning vfs.zfs.vdev.max_pending and solving the issue of ZFS writes choking read IO

2010-03-24 Thread Dan Naumov
Hello

I am having a slight issue (and judging by Google results, similar
issues have been seen by other FreeBSD and Solaris/OpenSolaris users)
with writes choking the read IO. The issue is described pretty well
here: http://opensolaris.org/jive/thread.jspa?threadID=106453. It seems
that under heavy write load, ZFS likes to aggregate a really huge
amount of data before actually writing it to disk, resulting in sudden
10+ second stalls where it frantically tries to commit everything,
completely choking read IO in the process and sometimes even the
network (with a large enough write to a mirror pool using dd, I can
cause my SSH sessions to drop dead without actually running out of RAM;
as soon as the data is committed, I can reconnect).

Beyond the issue of system interactivity (or rather, the
near-disappearance thereof) during these enormous flushes, this kind
of pattern seems really inefficient from the CPU utilization point of
view. Instead of a relatively stable and consistent flow of reads and
writes, allowing the CPU to be utilized as much as possible, the CPU
basically stays idle for 10+ seconds while the system commits the data
(or however long the flush takes), and the process of committing
unwritten data to the pool seemingly completely trounces the priority
of any read operations.

Has anyone done any extensive testing of the effects of tuning
vfs.zfs.vdev.max_pending on this issue? Is there some universally
recommended value beyond the default 35? Anything else I should be
looking at?
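
Whatever values end up being tested, it helps to watch the burst pattern directly while experimenting; a small sketch (the pool name is a placeholder):

```shell
# 1-second samples make the defer-then-frantic-commit pattern obvious
zpool iostat tank 1

# On 8.0 these are boot-time tunables: set them in /boot/loader.conf,
# reboot, and compare the iostat pattern before and after
sysctl vfs.zfs.txg.timeout
sysctl vfs.zfs.vdev.max_pending
```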


- Sincerely,
Dan Naumov


RE: 12 TB Disk In freebsd AMD 64 ?

2010-03-22 Thread Dan Naumov
MBR can only address volumes up to 2TB; however, we are no longer
limited to MBR. With GPT, we can have really, really big volumes. That
being said, I really don't think you should be using a single 12TB
volume with UFS, even if you have underlying redundancy provided by a
hardware RAID device. Have you ever had to fsck a 2TB volume or bigger?
It's not fun. My recommendation would be to use ZFS: let it manage your
array and filesystems, and run it on top of individual raw disk
devices. If you must use your RAID controller, use it in JBOD mode.
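
A minimal sketch of that setup, assuming the controller exposes six raw disks as da0-da5 (device names are placeholders):

```shell
# ZFS happily takes whole raw disks for a pure data pool
zpool create tank raidz da0 da1 da2 da3 da4 da5
zfs create tank/data

# Instead of fsck there is online scrubbing against checksums:
zpool scrub tank
zpool status tank
```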

If you want a relatively technical introduction to ZFS and why it's
good for you, read up here:
http://www.slideshare.net/relling/zfs-tutorial-usenix-june-2009

- Sincerely,
Dan Naumov


sftp server with speed throttling

2010-03-21 Thread Dan Naumov
What are my options if I want to run an sftp server with speed
throttling? My understanding is that OpenSSH (which provides sftp) in
the base system does not support this directly, so I would have to
either use a custom kernel with ALTQ (and I would really rather stick
to GENERIC so I can use freebsd-update), which sounds like a bit too
much configuration work, or pass sftp traffic through PF and throttle
it there (ugly, and it would also affect ssh traffic).

Are there any sftp servers with this functionality built in? I would
just like to be able to set limits for upload speed globally for the
entire server, and preferably also to set speeds on a per-user basis.
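
One candidate from ports would be ProFTPD with its mod_sftp module, which can reuse the server's normal transfer throttling; a hedged fragment (directive names from stock ProFTPD, untested here, so verify against the documentation of your installed version):

```
<IfModule mod_sftp.c>
  SFTPEngine on
  Port 2222                 # keep it off the real sshd's port 22
  SFTPHostKey /usr/local/etc/proftpd/ssh_host_rsa_key
  # Limits in KB/s; per-user/per-group restrictions can be layered on
  TransferRate RETR 150
  TransferRate STOR 500
</IfModule>
```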

Thanks.

- Sincerely,
Dan Naumov


Re: Samba read speed performance tuning

2010-03-20 Thread Dan Naumov
On Sat, Mar 20, 2010 at 3:49 AM, Gary Gatten ggat...@waddell.com wrote:
 It MAY make a big diff, but make sure during your tests you use unique files
 or flush the cache, or you'll be testing cache speed and not disk speed.

Yeah, I did make sure to use unique files when testing the effects of
prefetch. This is an Atom D510 / Supermicro X7SPA-H / 4GB RAM with 2
slow 2tb WD Green (WD20EADS) disks with 32mb cache in a ZFS mirror,
after enabling prefetch:

bonnie -s 8192

              ---Sequential Output---- ---Sequential Input--- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec  %CPU  /sec %CPU
         8192 29065 68.9 52027 39.8 39636 33.3 54057 95.4 105335 34.6 174.1  7.9

DD read:
dd if=/dev/urandom of=test2 bs=1M count=8192
dd if=test2 of=/dev/zero bs=1M
8589934592 bytes transferred in 76.031399 secs (112978779 bytes/sec)
(107.74mb/s)


Individual disk read capability: 75mb/s
Reading off a mirror of 2 disks with prefetch disabled: 60mb/s
Reading off a mirror of 2 disks with prefetch enabled: 107mb/s


- Sincerely,
Dan Naumov


Samba read speed performance tuning

2010-03-19 Thread Dan Naumov
On a FreeBSD 8.0-RELEASE/amd64 system with a Supermicro X7SPA-H board
using an Intel gigabit nic with the em driver, running on top of a ZFS
mirror, I was seeing a strange issue. Local reads and writes to the
pool easily saturate the disks with roughly 75mb/s throughput, which
is roughly the best these drives can do. However, working with Samba,
writes to a share could easily pull off 75mb/s and saturate the disks,
but reads off a share were resulting in rather pathetic 18mb/s
throughput.

I found a thread on the FreeBSD forums
(http://forums.freebsd.org/showthread.php?t=9187) and followed the
suggested advice. I rebuilt Samba with AIO support, kldloaded the aio
module and made the following changes to my smb.conf

From:
socket options=TCP_NODELAY

To:
socket options=SO_RCVBUF=131072 SO_SNDBUF=131072 TCP_NODELAY
min receivefile size=16384
use sendfile=true
aio read size = 16384
aio write size = 16384
aio write behind = true
dns proxy = no

This showed a very welcome improvement in read speed, I went from
18mb/s to 48mb/s. The write speed remained unchanged and was still
saturating the disks. Now I tried the suggested sysctl tunables:

atombsd# sysctl net.inet.tcp.delayed_ack=0
net.inet.tcp.delayed_ack: 1 -> 0

atombsd# sysctl net.inet.tcp.path_mtu_discovery=0
net.inet.tcp.path_mtu_discovery: 1 -> 0

atombsd# sysctl net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.recvbuf_inc: 16384 -> 524288

atombsd# sysctl net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.recvbuf_max: 262144 -> 16777216

atombsd# sysctl net.inet.tcp.sendbuf_inc=524288
net.inet.tcp.sendbuf_inc: 8192 -> 524288

atombsd# sysctl net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendbuf_max: 262144 -> 16777216

atombsd# sysctl net.inet.tcp.sendspace=65536
net.inet.tcp.sendspace: 32768 -> 65536

atombsd# sysctl net.inet.udp.maxdgram=57344
net.inet.udp.maxdgram: 9216 -> 57344

atombsd# sysctl net.inet.udp.recvspace=65536
net.inet.udp.recvspace: 42080 -> 65536

atombsd# sysctl net.local.stream.recvspace=65536
net.local.stream.recvspace: 8192 -> 65536

atombsd# sysctl net.local.stream.sendspace=65536
net.local.stream.sendspace: 8192 -> 65536

This improved the read speeds a further tiny bit: now I went from
48mb/s to 54mb/s. This is as far as I got, however; I can't figure out how to
increase Samba read speed any further. Any ideas?
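A side note for anyone repeating the experiment above: values set with sysctl(8) at the command line are lost on reboot. A minimal sketch (same names and values as tested above; whether each one is worth keeping is a judgment call) of persisting them via /etc/sysctl.conf:

```
# /etc/sysctl.conf -- persist the tunables from the interactive test above
net.inet.tcp.delayed_ack=0
net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=524288
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=65536
```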


Re: Samba read speed performance tuning

2010-03-19 Thread Dan Naumov
On Fri, Mar 19, 2010 at 11:14 PM, Dan Naumov dan.nau...@gmail.com wrote:
 On a FreeBSD 8.0-RELEASE/amd64 system with a Supermicro X7SPA-H board
 using an Intel gigabit nic with the em driver, running on top of a ZFS
 mirror, I was seeing a strange issue. Local reads and writes to the
 pool easily saturate the disks with roughly 75mb/s throughput, which
 is roughly the best these drives can do. However, working with Samba,
 writes to a share could easily pull off 75mb/s and saturate the disks,
 but reads off a share were resulting in rather pathetic 18mb/s
 throughput.

 I found a thread on the FreeBSD forums
 (http://forums.freebsd.org/showthread.php?t=9187) and followed the
 suggested advice. I rebuilt Samba with AIO support, kldloaded the aio
 module and made the following changes to my smb.conf

 From:
 socket options=TCP_NODELAY

 To:
 socket options=SO_RCVBUF=131072 SO_SNDBUF=131072 TCP_NODELAY
 min receivefile size=16384
 use sendfile=true
 aio read size = 16384
 aio write size = 16384
 aio write behind = true
 dns proxy = no

 This showed a very welcome improvement in read speed, I went from
 18mb/s to 48mb/s. The write speed remained unchanged and was still
 saturating the disks. Now I tried the suggested sysctl tunables:

 atombsd# sysctl net.inet.tcp.delayed_ack=0
 net.inet.tcp.delayed_ack: 1 -> 0

 atombsd# sysctl net.inet.tcp.path_mtu_discovery=0
 net.inet.tcp.path_mtu_discovery: 1 -> 0

 atombsd# sysctl net.inet.tcp.recvbuf_inc=524288
 net.inet.tcp.recvbuf_inc: 16384 -> 524288

 atombsd# sysctl net.inet.tcp.recvbuf_max=16777216
 net.inet.tcp.recvbuf_max: 262144 -> 16777216

 atombsd# sysctl net.inet.tcp.sendbuf_inc=524288
 net.inet.tcp.sendbuf_inc: 8192 -> 524288

 atombsd# sysctl net.inet.tcp.sendbuf_max=16777216
 net.inet.tcp.sendbuf_max: 262144 -> 16777216

 atombsd# sysctl net.inet.tcp.sendspace=65536
 net.inet.tcp.sendspace: 32768 -> 65536

 atombsd# sysctl net.inet.udp.maxdgram=57344
 net.inet.udp.maxdgram: 9216 -> 57344

 atombsd# sysctl net.inet.udp.recvspace=65536
 net.inet.udp.recvspace: 42080 -> 65536

 atombsd# sysctl net.local.stream.recvspace=65536
 net.local.stream.recvspace: 8192 -> 65536

 atombsd# sysctl net.local.stream.sendspace=65536
 net.local.stream.sendspace: 8192 -> 65536

 This improved the read speeds a further tiny bit: now I went from
 48mb/s to 54mb/s. This is as far as I got, however; I can't figure out how to
 increase Samba read speed any further. Any ideas?


Oh my god... why did no one tell me what an enormous performance
boost vfs.zfs.prefetch_disable=0 (i.e. actually enabling prefetch) is.
My local reads off the mirror pool jumped from 75mb/s to 96mb/s (i.e.
they are now nearly 25% faster than reading off an individual disk)
and reads off a Samba share skyrocketed from 50mb/s to 90mb/s.

By default, FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386
system and on any amd64 system with less than 4GB of available
memory. My system is amd64 with 4GB of RAM, but the integrated video eats
some of that, so the autotuning disabled the prefetch. I had read up
on it, and a fair number of people seemed to have performance issues
caused by having prefetch enabled and got better results with it
turned off; in my case, however, enabling it gave a
really solid boost to performance.
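For anyone wanting to replicate this: the knob in question is a boot-time loader tunable, so a one-line sketch of making the change permanent (the tunable name is taken from the dmesg notice discussed in the follow-up thread) would be:

```
# /boot/loader.conf -- force-enable ZFS file-level prefetch even when the
# autotuning heuristic (less than 4GB of available RAM) would disable it
vfs.zfs.prefetch_disable="0"
```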


- Sincerely
Dan Naumov


Some questions about vfs.zfs.prefetch_disable=1 and ZFS filesystem versions

2010-03-15 Thread Dan Naumov
After looking at the arc_summary.pl script (found at
http://jhell.googlecode.com/files/arc_summary.pl), I have realized
that my system has set vfs.zfs.prefetch_disable=1 by default, looking
at dmesg, I see:

=
ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present;
to enable, add vfs.zfs.prefetch_disable=0 to /boot/loader.conf.
=

...except I do have 4gb of RAM. Is this caused by integrated GPU
snatching some of my memory at boot? From dmesg:

=
real memory  = 4294967296 (4096 MB)
avail memory = 4088082432 (3898 MB)
=

What kind of things does this tunable affect and how much of a
performance impact does enabling / disabling it have? Should I
manually enable it?

I've also noticed a really weird inconsistency: my dmesg says the following:

=
ZFS filesystem version 13
ZFS storage pool version 13
=

Yet:

=
zfs get version
NAME                          PROPERTY  VALUE  SOURCE
cerberus                      version   3      -
cerberus/DATA                 version   3      -
cerberus/ROOT                 version   3      -
cerberus/ROOT/cerberus        version   3      -
cerberus/home                 version   3      -
cerberus/home/atombsd         version   3      -
cerberus/home/friction        version   3      -
cerberus/home/jago            version   3      -
cerberus/home/karni           version   3      -
cerberus/tmp                  version   3      -
cerberus/usr-local            version   3      -
cerberus/usr-obj              version   3      -
cerberus/usr-ports            version   3      -
cerberus/usr-ports-distfiles  version   3      -
cerberus/usr-src              version   3      -
cerberus/var                  version   3      -
cerberus/var-db               version   3      -
cerberus/var-log              version   3      -
cerberus/var-tmp              version   3      -

Is this normal or should zfs get version also show version 13? This
is on a system with the pool and filesystems created with 8.0-RELEASE,
by the way.
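For reference, the pool and filesystem format versions can be queried independently; a sketch of the relevant commands (on a stock 8.0 system the expected pairing is pool version 13 with filesystem version 3):

```
# Pool (on-disk) format version -- this is the "13" that dmesg prints
zpool upgrade
# List the filesystem versions this zfs(8) supports; plain "zfs get version"
# reports the per-filesystem version (3), not the pool version
zfs upgrade -v
```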

Thanks!

- Sincerely,
Dan Naumov


Re: Some questions about vfs.zfs.prefetch_disable=1 and ZFS filesystem versions

2010-03-15 Thread Dan Naumov
Never mind the question about ZFS filesystem versions; I should have
Googled more thoroughly and read Pawel's response to this question
first (answer: dmesg reports the filesystem version wrong, it IS
supposed to be v3). I am still curious about prefetch though.


- Sincerely,
Dan Naumov


powerd on 8.0, is it considered safe?

2010-03-08 Thread Dan Naumov
Hello

Is powerd finally considered stable and safe to use on 8.0? At least
on 7.2, it consistently caused panics when used on Atom systems with
Hyper-Threading enabled, but I recall that Attilio Rao was looking
into it.


- Sincerely,
Dan Naumov


RE: powerd on 8.0, is it considered safe?

2010-03-08 Thread Dan Naumov
Okay, now I am baffled.

Up until this point, I wasn't using powerd on this new Atom D510
system. I ran sysctl and noticed that dev.cpu.0.freq is actually 1249
and doesn't change no matter what kind of load the system is under. If
I boot into the BIOS, the CPU is shown as 1.66 GHz. Okay... I guess
this explains why my buildworld and buildkernel took over 5 hours, if
by default it gets stuck at 1249 MHz for no obvious reason. I enabled
powerd and now, according to dev.cpu.0.freq, the system is permanently
stuck at 1666 MHz, regardless of whether the system is under load or
not.

atombsd# uname -a
FreeBSD atombsd.localdomain 8.0-RELEASE-p2 FreeBSD 8.0-RELEASE-p2 #0:
Tue Jan  5 21:11:58 UTC 2010
r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64

atombsd# kenv | grep smbios.planar.product
smbios.planar.product=X7SPA-H

atombsd# sysctl dev.cpu dev.est dev.cpufreq dev.p4tcc debug.cpufreq
kern.timecounter
dev.cpu.0.%desc: ACPI CPU
dev.cpu.0.%driver: cpu
dev.cpu.0.%location: handle=\_PR_.P001
dev.cpu.0.%pnpinfo: _HID=none _UID=0
dev.cpu.0.%parent: acpi0
dev.cpu.0.freq: 1666
dev.cpu.0.freq_levels: 1666/-1 1457/-1 1249/-1 1041/-1 833/-1 624/-1
416/-1 208/-1
dev.cpu.0.cx_supported: C1/0
dev.cpu.0.cx_lowest: C1
dev.cpu.0.cx_usage: 100.00% last 500us
dev.cpu.1.%desc: ACPI CPU
dev.cpu.1.%driver: cpu
dev.cpu.1.%location: handle=\_PR_.P002
dev.cpu.1.%pnpinfo: _HID=none _UID=0
dev.cpu.1.%parent: acpi0
dev.cpu.1.cx_supported: C1/0
dev.cpu.1.cx_lowest: C1
dev.cpu.1.cx_usage: 100.00% last 500us
dev.cpu.2.%desc: ACPI CPU
dev.cpu.2.%driver: cpu
dev.cpu.2.%location: handle=\_PR_.P003
dev.cpu.2.%pnpinfo: _HID=none _UID=0
dev.cpu.2.%parent: acpi0
dev.cpu.2.cx_supported: C1/0
dev.cpu.2.cx_lowest: C1
dev.cpu.2.cx_usage: 100.00% last 500us
dev.cpu.3.%desc: ACPI CPU
dev.cpu.3.%driver: cpu
dev.cpu.3.%location: handle=\_PR_.P004
dev.cpu.3.%pnpinfo: _HID=none _UID=0
dev.cpu.3.%parent: acpi0
dev.cpu.3.cx_supported: C1/0
dev.cpu.3.cx_lowest: C1
dev.cpu.3.cx_usage: 100.00% last 500us
sysctl: unknown oid 'dev.est'

Right. So how do I investigate why the CPU gets stuck at 1249 MHz
after boot by default when not using powerd, and why it gets stuck at
1666 MHz with powerd enabled and doesn't scale back down when idle?
Out of curiosity, I stopped powerd, but the CPU remained at 1666 MHz.


- Sincerely,
Dan Naumov


Re: powerd on 8.0, is it considered safe?

2010-03-08 Thread Dan Naumov
OK, now I feel a bit stupid. The second half of my PR at
http://www.freebsd.org/cgi/query-pr.cgi?pr=144551 (anything related to
powerd behaviour) can be ignored. For testing purposes, I started
powerd in the foreground and observed its behaviour. It works exactly
as advertised: apparently the very act of issuing a sysctl -a |
grep dev.cpu.0.freq command uses up a high % of CPU time for a
fraction of a second, resulting in confusing output; I was always
getting the highest CPU frequency state as the output. Testing powerd
in the foreground, however, shows correct behaviour: the CPU is downclocked
both before and after issuing that command :)

Still doesn't explain why the system boots up at 1249 MHz, but that's
not that big of an issue now that I can see that powerd is
behaving correctly.

- Sincerely,
Dan Naumov


freebsd-update on a 8.0 rootzfs system

2010-03-07 Thread Dan Naumov
Hello folks

I have an 8.0 system that uses zfsroot and gptzfsboot. It uses the
GENERIC kernel, and the only thing that had to be manually recompiled
is obviously the bootloader, to enable ZFS boot support; other than
that, the system is using stock 8.0 binaries. Since fully rebuilding
world and kernel on this system is a 5-hour process, I would very much
like to use freebsd-update, and I wanted someone to clarify the
utility's behaviour. If I run freebsd-update on this system, what will
it do when it detects that the bootloader binaries do not match those
of stock 8.0-RELEASE? Will it:

1) Ignore the changed/recompiled bootloader files completely, only
updating the binaries whose checksums it can recognize. This behaviour
is alright for updating within 8.0, updating for release errata, but
would cause some problems updating to 8.1 and further, since 8.1 will
have zfs capable bootloader by default and having freebsd-update
always completely ignore a system component that has once been
recompiled sounds a bit silly.

2) Happily update the system, overwrite my custom compiled bootloader,
forcing me to manually rebuild the bootloader again before I reboot
the system. This I guess would actually be the desired behaviour.


- Sincerely,
Dan Naumov


Re: freebsd-update on a 8.0 rootzfs system

2010-03-07 Thread Dan Naumov
On Sun, Mar 7, 2010 at 1:57 PM, Dan Naumov dan.nau...@gmail.com wrote:
 Hello folks

 I have a 8.0 system that uses zfsroot and gptzfsboot. It uses the
 GENERIC kernel and the only thing that had to be manually recompiled
 is obviously the bootloader, to enable zfs boot support, other then
 that, the system is using stock 8.0 binaries. Since fully rebuilding
 world and kernel on this system is a 5 hour process, I would very much
 like to use freebsd-update and I wanted someone to clarify the
 utility's behaviour. If I run freebsd-update on this system, what will
 it do when it detects that the bootloader binaries do not match those
 of stock 8.0-RELEASE? Will it:

 1) Ignore the changed/recompiled bootloader files completely, only
 updating the binaries whose checksums it can recognize. This behaviour
 is alright for updating within 8.0, updating for release errata, but
 would cause some problems updating to 8.1 and further, since 8.1 will
 have zfs capable bootloader by default and having freebsd-update
 always completely ignore a system component that has once been
 recompiled sounds a bit silly.

 2) Happily update the system, overwrite my custom compiled bootloader,
 forcing me to manually rebuild the bootloader again before I reboot
 the system. This I guess would actually be the desired behaviour

OK, I did a test run of this in a VM environment, and #1 is what
happens. I tried freebsd-update IDS first, and that showed that
the /boot/loader SHA256 does not match what is expected. I then applied
the updates, but it ignored my custom /boot/loader anyway and didn't
touch it despite the mismatch. Why?

My biggest concern is what this means going forward, when the
eventual time for upgrading to 8.1 and 8.2 comes. 8.1 definitely has a
changed bootloader. Does the current behaviour mean that when I
upgrade to 8.1, it will still refuse to update the bootloader, and will
refuse to update it forever, or will it actually update it to whatever
ships with 8.1, which would be the desired behaviour?

- Sincerely,
Dan Naumov


RE: make make install accept defaults

2010-03-07 Thread Dan Naumov
Portmaster (ports-mgmt/portmaster) will help you do that.

- Sincerely,
Dan Naumov


Re: Automated kernel crash reporting system

2010-03-05 Thread Dan Naumov
On Fri, Mar 5, 2010 at 1:19 PM, Robert Watson rwat...@freebsd.org wrote:

 On Thu, 4 Mar 2010, sean connolly wrote:

 Automatic reporting would end up being a mess given that panics can be
 caused by hardware problems. Having an autoreport check if memtest was run
 before it reports, or having it only run with -CURRENT, might be useful.

I, too, disagree with this. Surely the most attention would be given to the
most often recurring problems across varied hardware. If a new
-RELEASE is tagged and suddenly there is an influx of very similar
automated crash reports across a wide selection of hardware, some
conclusions can be reached.


- Sincerely,
Dan Naumov


Automated kernel crash reporting system

2010-03-04 Thread Dan Naumov
Hello

I noticed the following on the FreeBSD website:
http://www.freebsd.org/projects/ideas/ideas.html#p-autoreport Has
there been any progress/work done on the automated kernel crash
reporting system? The current ways of enabling and gathering the
information required by developers for investigating panics and
similar issues are unintuitive and user-hostile to say the least and
anything to automate the process would be a very welcome addition.


- Sincerely,
Dan Naumov


locale settings and displaying file names in multiple languages

2010-03-03 Thread Dan Naumov
Hello

I have an 8.0/amd64 system serving a few Samba shares. Windows clients
write files to some of these shares using multiple languages: English,
Finnish and Russian. When accessed from any given Windows client, the
file and directory names all look correct. However, when accessing
these same files locally, the file and directory names that use
Russian and Finnish are full of question marks, like this
for Russian:

-rw-r--r--  1 nobody  nobody11M Feb 21  2008  ??
-rw-r--r--  1 nobody  nobody   9.2M Feb 21  2008 ??-??
-rw-r--r--  1 nobody  nobody   6.3M Feb 21  2008 ?? ...
-rw-r--r--  1 nobody  nobody   7.6M Feb 21  2008
 
-rw-r--r--  1 nobody  nobody   7.1M Feb 21  2008 ?? 
-rw-r--r--  1 nobody  nobody   7.7M Feb 21  2008 ??

and like this for finnish:

drwxr-xr-x2 nobody  nobody   13 Mar  2 03:20 Turmion K??til??t
- Hoitovirhe
drwxr-xr-x2 nobody  nobody7 Mar  2 03:20 Turmion K??til??t
- Niuva 20
drwxr-xr-x2 nobody  nobody   13 Mar  2 03:20 Turmion K??til??t
- Pirun Nyrkki
drwxr-xr-x2 nobody  nobody   12 Mar  2 03:20 Turmion K??til??t
- U.S.C.H.!

And operating on these files locally is tricky, to say the least: for
example, I cannot do cd  ?? for obvious
reasons, because there is no directory that REALLY has all those
question marks. However, I am still able to browse and operate on
these files using Midnight Commander; somehow it actually works. How
do I need to set the locale settings on the FreeBSD machine so that
all file names are displayed correctly when operated on locally?
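One commonly suggested direction (an assumption about this setup, not something confirmed in the thread: it presumes letting Samba store names on disk as UTF-8) is to pin Samba's on-disk charset and give local shell sessions a matching locale:

```
# smb.conf [global] -- sketch: have Samba store file names on disk as UTF-8
unix charset = UTF-8

# and for local sessions (e.g. in the user's shell startup file), a UTF-8
# locale so that ls(1) and the shell can display those same names:
export LANG=en_US.UTF-8
export LC_CTYPE=en_US.UTF-8
```

Existing files written under a different charset would still need their names converted once, e.g. with a bulk-rename tool.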


- Sincerely,
Dan Naumov


RE: RAID10 doen't boot

2010-03-02 Thread Dan Naumov
Hi,
I'd really appreciate it if somebody could help me out!
I have a box with a MB ASUS P5WDG2-WS Pro with two built-in SATA II RAID 
controllers
(Intel ICH7R and Marvell 88SE614x).
I installed 4 HDD WD WD5002ABYS (500GB each) on the 4 SATA ports of the Intel
ICH7R and using the Intel Matrix Storage Manager I created a RAID10 (Strip 
14KB, Size
931.5GB, Status Normal, Bootable Yes) out of these 4 HDD.
Then in the BIOS I set the IDE Configuration to Configure SATA As [RAID], 
OnBoard
Serial-ATA BOOTROM [Enabled] and disable the Marvell SATA controller, for I 
use only
the Intel ICH7R.
Then I successfully installed FreeBSD 7.1 on the RAID partition, ar0.

The installation completed successfully but the system persistently didn't 
boot and gave this
error message:

F1 FreeBSD
Default: F1
No /boot/loader
FreeBSD/i386 boot
Default: 0:ad(0,a)/boot/kernel/kernel
boot: No /boot/kernel/kernel

Thanks a lot,
Alex

Hello, I know that this probably doesn't help much, but I do recall
reading in several different places that support for Intel firmware
RAID in both FreeBSD and Linux is very shaky and limited at best. Is
there any particular reason you want to use it instead of ZFS or
gmirror/gstripe/graid5?


- Sincerely,
Dan Naumov


RE: Compiler Flags problem with core2 CPU

2010-03-02 Thread Dan Naumov
See the section 3.17.14 Intel 386 and AMD x86-64 Options in the gcc
Info manual.  It contains a full list of the supported CPU-TYPE values
for the -mtune=CPU-TYPE option.  The -march=CPU-TYPE option accepts the
same CPU types:

`-march=CPU-TYPE'
 Generate instructions for the machine type CPU-TYPE.  The
 choices for CPU-TYPE are the same as for `-mtune'.  Moreover,
 specifying `-march=CPU-TYPE' implies `-mtune=CPU-TYPE'.


Hello

Out of curiosity, what is the optimal -march= value to use for the
new Atom D510 CPU: http://ark.intel.com/Product.aspx?id=43098 ?

Thanks
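For what it's worth, the answer depends on the compiler version: the dedicated -march=atom target only appeared in gcc 4.5, so the base gcc 4.2.1 shipped with FreeBSD 8.0 cannot use it. A sketch of an /etc/make.conf entry under that assumption:

```
# /etc/make.conf -- sketch; base gcc 4.2.1 has no "atom" CPU type, so fall
# back to the closest supported one (core2 on amd64, prescott on i386)
CPUTYPE?=core2
```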


- Sincerely,
Dan Naumov


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance (fixed)

2010-02-27 Thread Dan Naumov
Hello folks

A few weeks ago, there was a discussion started by me regarding
abysmal read/write performance using a ZFS mirror on 8.0-RELEASE. I was
using an Atom 330 system with 2GB RAM, and it was pointed out to me
that my problem was most likely having both disks attached to a PCI
SIL3124 controller; switching to the new AHCI drivers didn't help one
bit. To reiterate, here are the Bonnie and DD numbers I got on that
system:

===

Atom 330 / 2gb ram / Intel board + PCI SIL3124

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
 8192 21041 53.5 22644 19.4 13724 12.8 25321 48.5 43110 14.0 143.2 3.3

dd if=/dev/zero of=/root/test1 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 143.878615 secs (29851325 bytes/sec) (28.4 mb/s)

===

Since then, I switched the exact same disks to a different system:
Atom D510 / 4gb ram / Supermicro X7SPA-H / ICH9R controller (native).
Here are the updated results:

  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
 8192 30057 68.7 50965 36.4 27236 21.3 33317 58.0 53051 14.3 172.4  3.2

dd if=/dev/zero of=/root/test1 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 54.977978 secs (78121594 bytes/sec) (74.5 mb/s)

===

Write performance now seems to have increased by a factor of 2 to 3
and is now definitely in line with the expected performance of the
disks in question (cheap 2TB WD20EADS with 32MB cache). Thanks to
everyone who has offered help and tips!


- Sincerely,
Dan Naumov


8.0 on new hardware and a few errors, should I be worried?

2010-02-27 Thread Dan Naumov
Hello

I've very recently finished installing 8.0-RELEASE on some new
hardware and I noticed a few error messages that make me a bit uneasy.
This is a snip from my dmesg:

--
acpi0: SMCI  on motherboard
acpi0: [ITHREAD]
acpi0: Power Button (fixed)
acpi0: reservation of fee0, 1000 (3) failed
acpi0: reservation of 0, a (3) failed
acpi0: reservation of 10, bf60 (3) failed
--

What do these mean and should I worry about it? The full DMESG can be
viewed here: http://jago.pp.fi/temp/dmesg.txt

Additionally, while building a whole bunch of ports on this new system
(about 30 or so, samba, ncftp, portaudit, bash, the usual suspects), I
noticed the following in my logs during the build process:

--
Feb 27 21:24:01 atombsd kernel: pid 38846 (try), uid 0: exited on
signal 10 (core dumped)
Feb 27 22:17:49 atombsd kernel: pid 89665 (conftest), uid 0: exited on
signal 6 (core dumped)
--

All ports seem to have built and installed successfully. Again, what do
these mean and should I worry about it? :)

Thanks!

- Sincerely,
Dan Naumov


RE: booting off a ZFS pool consisting of multiple striped mirror vdevs

2010-02-16 Thread Dan Naumov
 I don't know, but I plan to test that scenario in a few days.

 Matt

Please share the results when you're done, I am really curious :)


 It *should* work... I made changes a while back that allow for multiple
 vdevs to attach to the root.  In this case you should have 3 mirror
 vdevs attached to the root, so as long as the BIOS can enumerate all of
 the drives, we should find all of the vdevs and build the tree
 correctly.  It should be simple enough to test in qemu, except that the
 BIOS in qemu is a little broken and might not id all of the drives.

 robert.

If booting off a stripe of 3 mirrors should work assuming no BIOS bugs,
can you explain why booting off simple stripes (of any number of
disks) is currently unsupported? I haven't tested that myself, but
everywhere I look seems to indicate that booting off a simple stripe
doesn't work; or is that also out of date after your
changes? :)


- Sincerely,
Dan Naumov


booting off a ZFS pool consisting of multiple striped mirror vdevs

2010-02-13 Thread Dan Naumov
Hello

I have successfully tested and used a full ZFS install of FreeBSD 8.0
on both single-disk and mirrored-disk configurations using both MBR and
GPT partitioning. AFAIK, with the more recent -CURRENT and -STABLE it
is also possible to boot off a root filesystem located on raidz/raidz2
pools. But what about booting off pools consisting of multiple striped
mirror or raidz vdevs? Like this:

Assume each disk looks like a half of a traditional ZFS mirror root
configuration using GPT

1: freebsd-boot
2: freebsd-swap
3: freebsd-zfs

|disk1+disk2| + |disk3+disk4| + |disk5+disk6|
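For concreteness, the per-disk layout described above could be created roughly like this with gpart(8) (a sketch: the device name, sizes and labels are illustrative, and the same commands would be repeated for each of the six disks):

```
# One disk of the layout above (repeat for ad6, ad8, ...)
gpart create -s gpt ad4
gpart add -b 34 -s 128 -t freebsd-boot ad4
gpart add -s 2G -t freebsd-swap -l swap0 ad4
gpart add -t freebsd-zfs -l disk0 ad4
# install the ZFS-aware boot code into the freebsd-boot partition
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad4
```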

My logic tells me that while booting off any of the 6 disks, boot0 and
boot1 stage should obviously work fine, but what about the boot2
stage? Can it properly handle booting off a root filesystem thats
striped across 3 mirror vdevs or is booting off a single mirror vdev
the best that one can do right now?


- Sincerely,
Dan Naumov


managing ZFS automatic mounts - FreeBSD deviates from Solaris?

2010-02-13 Thread Dan Naumov
Hello

From the SUN ZFS Administration Guide:
http://docs.sun.com/app/docs/doc/819-5461/gaztn?a=view

If ZFS is currently managing the file system but it is currently
unmounted, and the mountpoint property is changed, the file system
remains unmounted.

This does not seem to be the case in FreeBSD (8.0-RELEASE):

=
zfs get mounted tank/home
NAMEPROPERTYVALUE   SOURCE
tank/home   mounted no  -

zfs set mountpoint=/mnt/home tank/home

zfs get mounted tank/home
NAMEPROPERTYVALUE   SOURCE
tank/home   mounted no  -
=

This might not look like a serious issue at first, until you try doing
an installation of FreeBSD from FIXIT and setting up multiple
filesystems and their mountpoints at the very end of the installation
process. For example, if you set the mountpoint of your
poolname/rootfs/usr to /usr as one of the finishing touches to the
system installation, it will immediately mount the filesystem,
instantly breaking your FIXIT environment, and you cannot proceed any
further. Is this a known issue and/or should I submit a PR?
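One workaround that should sidestep the FIXIT breakage described above (a sketch, assuming a pool named "tank"; altroot is a temporary import-time property, so nothing persists to the installed system except the mountpoints themselves):

```
# Re-import the pool with an altroot, so that changing a mountpoint mounts
# the filesystem under /mnt instead of over the live FIXIT environment
zpool export tank
zpool import -o altroot=/mnt tank
zfs set mountpoint=/usr tank/rootfs/usr   # now mounts at /mnt/usr
zpool export tank   # mountpoint properties remain set for the first boot
```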

- Sincerely,
Dan Naumov


Re: managing ZFS automatic mounts - FreeBSD deviates from Solaris?

2010-02-13 Thread Dan Naumov
On Sun, Feb 14, 2010 at 2:24 AM, Dan Naumov dan.nau...@gmail.com wrote:
 Hello

 From the SUN ZFS Administration Guide:
 http://docs.sun.com/app/docs/doc/819-5461/gaztn?a=view

 If ZFS is currently managing the file system but it is currently
 unmounted, and the mountpoint property is changed, the file system
 remains unmounted.

 This does not seem to be the case in FreeBSD (8.0-RELEASE):

 =
 zfs get mounted tank/home
 NAME            PROPERTY        VALUE           SOURCE
 tank/home               mounted         no                      -

 zfs set mountpoint=/mnt/home tank/home

 zfs get mounted tank/home
 NAME            PROPERTY        VALUE           SOURCE
 tank/home               mounted         no                      -
 =

 This might not look like a serious issue at first, until you try doing
 an installation of FreeBSD from FIXIT, trying to setup multiple
 filesystems and their mountpoints at the very end of the installation
 process. For example if you set the mountpoint of your
 poolname/rootfs/usr to /usr as one of the finishing touches to the
 system installation, it will immediately mount the filesystem,
 instantly breaking your FIXIT environment and you cannot proceed any
 further. Is this a known issue and/or should I submit a PR?

Oops, I managed to screw up my previous email. My point was to show
that mounted changes to YES after changing the mountpoint property
:)

- Dan


RE: Intel D510MO Mini-ITX Motherboard - Is anyone using FreeBSD on this?

2010-01-29 Thread Dan Naumov
closing out this thread

I did go ahead and buy one of these boards and can now report that
FreeBSD-8.0/i386 boots and runs on it with no apparent problems.  A user
in the forums reports similar success running 8.0/amd64.

Extremely quiet and inexpensive board. At around $80, it is one-third
the cost of the Supermicro boards.

Not much use as a space heater, however; I've had it running for more
than 24 hours, busily recompiling ports, and the heatsink is just barely
warm to the touch.  Next time I reboot it I'm going to plug it into my
Kill-a-Watt meter to measure its power draw...

Reports of successes with both amd64 and i386 versions of 8.0-RELEASE
and the Intel D510MO board have been showing up on a few different
discussion forums now.

I have to correct myself in regard to the Supermicro X7SPA-H board.
The board is roughly twice as expensive as the Intel D510MO
(~$75 for the D510MO vs $150-170 for the X7SPA-H). However, these
prices seem to only hold in the US. Looking at European
prices, the D510MO board goes for about 75-80 euro and
the X7SPA-H for about 190-230 euro, depending on the country and
reseller. So while the Supermicro board is roughly twice as expensive
as the Intel board in the US, it's roughly 3 times as expensive if you
are buying in Europe.

I still ended up going with the X7SPA-H though (finally pulled the
trigger on ordering all the parts for a new system yesterday), mainly
because it saves me the trouble of immediately having to hunt for an
additional disk controller card: the D510MO has only 2 SATA ports and
a PCI slot for expansion (and I have REALLY burned myself badly on the
performance of PCI disk controller cards in the past), while the
X7SPA-H comes with 6 native SATA ports on an ICH9R controller and has
a PCIe x4 slot (x16 physical form) for expansion.


- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


booting off GPT partitions

2010-01-27 Thread Dan Naumov
Hey

I was under the impression that everyone and their dog is using GPT
partitioning in FreeBSD these days, including for boot drives and that
I was just being unlucky with my current NAS motherboard (Intel
D945GCLF2) having supposedly shaky support for GPT boot. But right now
I am having an email exchange with Supermicro support (whom I
contacted since I am pondering their X7SPA-H board for a new system),
who are telling me that booting off GPT requires UEFI BIOS, which is
supposedly a very new thing and that for example NONE of their current
motherboards have support for this.

Am I misunderstanding something or is the Supermicro support tech misguided?


- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


RE: Help booting FreeBSD with a ZFS root filesystem

2010-01-27 Thread Dan Naumov
I didn't want a mirror though, I wanted a stripe.

I still don't understand why what I'm doing isn't working.

As far as I know, having the root pool on a stripe isn't supported.

OpenSolaris supports having the root pool on a simple pool and a mirror pool.
FreeBSD supports having the root pool on a simple pool, mirror pool
and raidz, but afaik booting off raidz used to have issues.

- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


RE: Intel D510MO Mini-ITX Motherboard - Is anyone using FreeBSD on this?

2010-01-25 Thread Dan Naumov
Not to steal your discussion thread, but I thought I'd ask (and you'd
perhaps too be interested) what's the status of FreeBSD on these 2:

Supermicro X7SPA-H:
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
Supermicro X7SPA-HF:
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=HIPMI=Y

Supermicro recently came out with quite a bunch of Atom-based
solutions and these 2 boards stuck out as having 6 x SATA ports, which
makes them tempting for a NAS solution.


- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Dan Naumov
On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
 On Mon, 25 Jan 2010, Dan Naumov wrote:

 I've checked with the manufacturer and it seems that the Sil3124 in
 this NAS is indeed a PCI card. More info on the card in question is
 available at
 http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
 I have the card described later on the page, the one with 4 SATA ports
 and no eSATA. Alright, so it being PCI is probably a bottleneck in
 some ways, but that still doesn't explain the performance THAT bad,
 considering that same hardware, same disks, same disk controller push
 over 65 MB/s in both reads and writes in Win2008. And again, I am
 pretty sure that I've had close to expected results when I was

 The slow PCI bus and this card look like the bottleneck to me. Remember that
 your Win2008 tests were with just one disk, your zfs performance with just
 one disk was similar to Win2008, and your zfs performance with a mirror was
 just under 1/2 that.

 I don't think that your performance results are necessarily out of line for
 the hardware you are using.

 On an old Sun SPARC workstation with retrofitted 15K RPM drives on Ultra-160
 SCSI channel, I see a zfs mirror write performance of 67,317KB/second and a
 read performance of 124,347KB/second.  The drives themselves are capable of
 100MB/second range performance. Similar to yourself, I see 1/2 the write
 performance due to bandwidth limitations.

 Bob

There is plenty of very sweet irony in my particular situation.
Initially I was planning to use a single X25-M 80GB SSD in the
motherboard SATA port for the actual OS installation as well as to
dedicate 50GB of it to become a designated L2ARC vdev for my ZFS
mirrors. The SSD attached to the motherboard port would be recognized
only as a SATA150 device for some reason, but I was still seeing
150MB/s throughput and sub-0.1 ms latencies on that disk simply
because of how crazy good the X25-Ms are. However, I ended up having
very bad issues with the Icydock 2.5-to-3.5 converter jacket I was
using to keep/fit the SSD in the system, and it would randomly drop
write IO under heavy load due to bad connectors. Having finally figured
out the cause of my OS installations on the SSD going belly up while
applying updates, I decided to move the SSD to my desktop and use it
there instead, additionally thinking that perhaps my idea of the
SSD was crazy overkill for what I need the system to do. Ironically,
now that I am seeing how horrible the performance is when I am
operating on the mirror through this PCI card, I realize that
actually, my idea was pretty bloody brilliant, I just didn't really
know why at the time.

An L2ARC device on the motherboard port would really help me with
random read IO, but to work around the utterly poor write performance,
I would also need a dedicated SLOG ZIL device. The catch is that while
L2ARC devices can be removed from the pool at will (should the device
up and die all of a sudden), dedicated ZILs cannot, and currently a
missing ZIL device will render the pool it belongs to unable
to import and thus inaccessible. There is some work happening in
Solaris to implement removing SLOGs from a pool, but that work hasn't
found its way into FreeBSD yet.


- Sincerely,
Dan Naumov

- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Dan Naumov
On Mon, Jan 25, 2010 at 9:34 AM, Dan Naumov dan.nau...@gmail.com wrote:
 On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn
 bfrie...@simple.dallas.tx.us wrote:
 On Mon, 25 Jan 2010, Dan Naumov wrote:

 I've checked with the manufacturer and it seems that the Sil3124 in
 this NAS is indeed a PCI card. More info on the card in question is
 available at
 http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
 I have the card described later on the page, the one with 4 SATA ports
 and no eSATA. Alright, so it being PCI is probably a bottleneck in
 some ways, but that still doesn't explain the performance THAT bad,
 considering that same hardware, same disks, same disk controller push
 over 65 MB/s in both reads and writes in Win2008. And again, I am
 pretty sure that I've had close to expected results when I was

 The slow PCI bus and this card look like the bottleneck to me. Remember that
 your Win2008 tests were with just one disk, your zfs performance with just
 one disk was similar to Win2008, and your zfs performance with a mirror was
 just under 1/2 that.

 I don't think that your performance results are necessarily out of line for
 the hardware you are using.

 On an old Sun SPARC workstation with retrofitted 15K RPM drives on Ultra-160
 SCSI channel, I see a zfs mirror write performance of 67,317KB/second and a
 read performance of 124,347KB/second.  The drives themselves are capable of
 100MB/second range performance. Similar to yourself, I see 1/2 the write
 performance due to bandwidth limitations.

 Bob

 There is plenty of very sweet irony in my particular situation.
 Initially I was planning to use a single X25-M 80GB SSD in the
 motherboard SATA port for the actual OS installation as well as to
 dedicate 50GB of it to become a designated L2ARC vdev for my ZFS
 mirrors. The SSD attached to the motherboard port would be recognized
 only as a SATA150 device for some reason, but I was still seeing
 150MB/s throughput and sub-0.1 ms latencies on that disk simply
 because of how crazy good the X25-Ms are. However, I ended up having
 very bad issues with the Icydock 2.5-to-3.5 converter jacket I was
 using to keep/fit the SSD in the system, and it would randomly drop
 write IO under heavy load due to bad connectors. Having finally figured
 out the cause of my OS installations on the SSD going belly up while
 applying updates, I decided to move the SSD to my desktop and use it
 there instead, additionally thinking that perhaps my idea of the
 SSD was crazy overkill for what I need the system to do. Ironically,
 now that I am seeing how horrible the performance is when I am
 operating on the mirror through this PCI card, I realize that
 actually, my idea was pretty bloody brilliant, I just didn't really
 know why at the time.

 An L2ARC device on the motherboard port would really help me with
 random read IO, but to work around the utterly poor write performance,
 I would also need a dedicated SLOG ZIL device. The catch is that while
 L2ARC devices can be removed from the pool at will (should the device
 up and die all of a sudden), dedicated ZILs cannot, and currently a
 missing ZIL device will render the pool it belongs to unable
 to import and thus inaccessible. There is some work happening in
 Solaris to implement removing SLOGs from a pool, but that work hasn't
 found its way into FreeBSD yet.


 - Sincerely,
 Dan Naumov

OK, final question: if/when I go about adding more disks to the system
and want redundancy, am I right in thinking that a ZFS pool of
disk1+disk2 mirror + disk3+disk4 mirror (a la RAID10) would drag my
write and read performance even further below the current 28 MB/s
/ 50 MB/s I am seeing with 2 disks on that PCI controller, and that in
order to have the least negative impact, I should simply have 2
independent mirrors in 2 independent pools (with the 5th disk slot in
the NAS given to a non-redundant single disk running off the one
available SATA port on the motherboard)?
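
A back-of-the-envelope check of the shared-bus math (assuming the classic 32-bit/33 MHz PCI theoretical peak of ~133 MB/s; an assumption, not a measured figure): every byte written to a ZFS mirror crosses the bus twice, once per mirror side, no matter how many mirror vdevs the pool stripes over.

```shell
# Theoretical write ceiling for mirrored data behind a shared PCI bus.
# 133 MB/s is the assumed 32-bit/33 MHz PCI peak; real-world is lower.
awk 'BEGIN {
  bus = 133        # MB/s, shared by every disk on the PCI card
  copies = 2       # mirrored data crosses the bus twice
  printf "mirrored write ceiling: <= %d MB/s of user data\n", int(bus / copies)
}'
# prints: mirrored write ceiling: <= 66 MB/s of user data
```

So a second mirror vdev does not lower the theoretical ceiling further, but four disks then contend for the same saturated bus, which is why real-world throughput typically ends up below the two-disk numbers.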

- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Dan Naumov
On Mon, Jan 25, 2010 at 7:40 PM, Alexander Motin m...@freebsd.org wrote:
 Artem Belevich wrote:
 aoc-sat2-mv8 was somewhat slower compared to ICH9 or LSI1068
 controllers when I tried it with 6 and 8 disks.
 I think the problem is that MV8 only does 32K per transfer and that
 does seem to matter when you have 8 drives hooked up to it. I don't
 have hard numbers, but peak throughput of MV8 with 8-disk raidz2 was
 noticeably lower than that of the LSI1068 in the same configuration. Both
 the LSI1068 and the MV8 were on the same PCI-X bus. It could be a driver
 limitation. The driver for Marvel SATA controllers in NetBSD seems a
 bit more advanced compared to what's in FreeBSD.

 I also wouldn't recommend to use Marvell 88SXx0xx controllers now. While
 potentially they are interesting, lack of documentation and numerous
 hardware bugs make existing FreeBSD driver very limited there.

 I wish intel would make cheap multi-port PCIe SATA card based on their
 AHCI controllers.

 Indeed. Intel on-board AHCI SATA controllers are fastest from all I have
 tested. Unluckily, they are not producing discrete versions. :(

 Now, if discrete solution is really needed, I would still recommend
 SiI3124, but with proper PCI-X 64bit/133MHz bus or built-in PCIe x8
 bridge. They are fast and have good new siis driver.

 On Mon, Jan 25, 2010 at 3:29 AM, Pete French
 petefre...@ticketswitch.com wrote:
 I like to use pci-x with aoc-sat2-mv8 cards or pci-e cards, that way you
 get a lot more bandwidth..
 I would go along with that - I have precisely the same controller, with
 a pair of eSATA drives, running ZFS mirrored. But I get a nice 100
 meg/second out of them if I try. My controller is, however, on PCI-X, not
 PCI. It's a shame PCI-X appears to have gone the way of the dinosaur :-(

 --
 Alexander Motin

Alexander, since you seem to be experienced in the area, what do you
think of these 2 for use in a FreeBSD8 ZFS NAS:

http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=HIPMI=Y

- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Dan Naumov
On Mon, Jan 25, 2010 at 8:32 PM, Alexander Motin m...@freebsd.org wrote:
 Dan Naumov wrote:
 Alexander, since you seem to be experienced in the area, what do you
 think of these 2 for use in a FreeBSD8 ZFS NAS:

 http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
 http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=HIPMI=Y

 Unluckily I haven't touched the Atom family closely yet, so I can't say
 anything about its performance. But the higher desktop-level (even if a bit
 old) ICH9R chipset there is IMHO a good option. It is MUCH better than ICH7,
 often used with previous Atoms. If I had a nice small Mini-ITX case with 6
 drive bays, I would definitely look for some board like that to build home
 storage.

 --
 Alexander Motin

CPU-performance-wise, I am not really worried. The current system is
an Atom 330 and even that is a bit overkill for what I do with it and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability, stability, low power use, quietness and fast disk
read/write speeds. I've been hearing some praise of ICH9R and 6 native
SATA ports should be enough for my needs. AFAIK, the Intel 82574L
network cards included on those are also very well supported?

- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: Loader, MBR and the boot process

2010-01-24 Thread Dan Naumov
On Sun, Jan 24, 2010 at 5:29 PM, John j...@starfire.mn.org wrote:
 On Fri, Jan 22, 2010 at 07:02:53AM +0200, Dan Naumov wrote:
 On Fri, Jan 22, 2010 at 6:49 AM, Dan Naumov dan.nau...@gmail.com wrote:
  On Fri, Jan 22, 2010 at 6:12 AM, Thomas K. f...@gothschlampen.com wrote:
  On Fri, Jan 22, 2010 at 05:57:23AM +0200, Dan Naumov wrote:
 
  Hi,
 
  I recently found a nifty FreeBSD ZFS root installation script and have
  been reworking it a bit to suit my needs better, including changing it
  from GPT to MBR partitioning. However, I was stumped, even though I
  had done everything right (or so I thought), the system would get
  stuck at Loader and refuse to go anywhere. After trying over a dozen
 
  probably this line is the cause:
 
  dd if=/mnt2/boot/zfsboot of=/dev/${TARGETDISK}s1a skip=1 seek=1024
 
  Unless by swap first you meant the on-disk location, and not the
  partition letter. If swap is partition a, you're writing the loader
  into swapspace.
 
 
  Regards,
  Thomas
 
  At first you made me feel silly, but then I decided to double-check, I
  uncommented the swap line in the partitioning part again, ensured I
  was writing the bootloader to ${TARGETDISK}s1b and ran the script.
  Same problem, hangs at loader. Again, if I comment out the swap,
  giving the entire slice to ZFS and then write the bootloader to
  ${TARGETDISK}s1a, run the script, everything works.

 I have also just tested creating 2 slices, like this:

 gpart create -s mbr ${TARGETDISK}
 gpart add -s 3G -t freebsd ${TARGETDISK}
 gpart create -s BSD ${TARGETDISK}s1
 gpart add -t freebsd-swap ${TARGETDISK}s1

 gpart add -t freebsd ${TARGETDISK}
 gpart create -s BSD ${TARGETDISK}s2
 gpart add -t freebsd-zfs ${TARGETDISK}s2

 gpart set -a active -i 2 ${TARGETDISK}
 gpart bootcode -b /mnt2/boot/boot0 ${TARGETDISK}


 and later:

 dd if=/mnt2/boot/zfsboot of=/dev/${TARGETDISK}s2 count=1
 dd if=/mnt2/boot/zfsboot of=/dev/${TARGETDISK}s2a skip=1 seek=1024


 Putting the swap into its own slice and then putting FreeBSD into
 its own slice worked fine. So why the hell can't they both coexist in
 one slice if the swap comes first?

 I know what the answer to this USED to be, but I don't know if it is
 still true (obviously, I think so, or I wouldn't waste your time).

 The filesystem code is all carefully written to avoid the very
 first few sectors of the partition.  That's because the partition
 table is there for the first filesystem of the slice (or disk).
 That's a tiny amount of space wasted, because it's also skipped on
 all the other filesystems even though there's not actually anything
 there, but it was a small inefficiency, even in the 70's.

 Swap does not behave that way.  SWAP will begin right at the slice
 boundary, with 0 offset.  As long as it's not the first partition, no
 harm, no foul.  If it IS the first partition, you just nuked your partition
 table.  As long as SWAP owns the slice, again, no harm, no foul, but
 if there were filesystems BEHIND it, you just lost 'em.

 That's the way it always used to be, and I think it still is.  SWAP can
 only be first if it is the ONLY thing using that slice (disk); otherwise,
 you need a filesystem first to protect the partition table.
 --

 John Lind
 j...@starfire.mn.org

This explanation does sound logical, but holy crap, if this is the
case, you'd think there would be bells, whistles and huge red-letter
warnings in EVERY FreeBSD installation / partitioning guide out there,
warning people never to put swap first (unless it is given a dedicated
slice). The warnings were nowhere to be seen, and a lot of hair first
greyed and was then torn out in the process of me trying to figure out
why my system would install but wouldn't boot.
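
John's explanation can be illustrated with a file-backed toy model (all paths and marker strings here are hypothetical, not the actual gpart commands from this thread): a filesystem skips the first sectors of its partition, so the label survives, while swap starts at offset 0 and clobbers it.

```shell
#!/bin/sh
# Toy model: /tmp/fakedisk.img stands in for a slice whose first sector
# holds the BSD disklabel. Marker strings are made up for illustration.
DISK=/tmp/fakedisk.img
dd if=/dev/zero of="$DISK" bs=512 count=64 2>/dev/null

# Pretend the first sector holds the disklabel.
printf 'DISKLABEL' | dd of="$DISK" conv=notrunc 2>/dev/null

# A filesystem writes at an offset, skipping the label area -- harmless:
printf 'FS-DATA' | dd of="$DISK" bs=512 seek=16 conv=notrunc 2>/dev/null
dd if="$DISK" bs=9 count=1 2>/dev/null; echo    # prints DISKLABEL

# Swap begins right at offset 0 of its partition -- the label is gone:
printf 'SWAPDATA!' | dd of="$DISK" conv=notrunc 2>/dev/null
dd if="$DISK" bs=9 count=1 2>/dev/null; echo    # prints SWAPDATA!
```

On a real disk the same thing happens when swap is the first partition of a shared slice: swapping writes from the very start of the partition and wipes the label that the filesystems behind it depend on.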

- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
 = 6.278 msec
Short backward:   400 iter in   2.233714 sec =  5.584 msec
Seq outer:   2048 iter in   0.427523 sec =  0.209 msec
Seq inner:   2048 iter in   0.341185 sec =  0.167 msec
Transfer rates:
outside:   102400 kbytes in   1.516305 sec = 67533 kbytes/sec
middle:    102400 kbytes in   1.351877 sec = 75747 kbytes/sec
inside:    102400 kbytes in   2.090069 sec = 48994 kbytes/sec

===

The exact same disks, on the exact same machine, are easily capable of
65+ MB/s throughput (tested with ATTO multiple times) with different
block sizes using Windows 2008 Server and NTFS. So what would be the
cause of these very low Bonnie result numbers in my case? Should I try
some other benchmark and if so, with what parameters?
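
As a quick cross-check independent of bonnie, a plain dd sequential write (the same kind of test suggested later in this thread) gives a single throughput figure. A minimal sketch, assuming a hypothetical scratch path /tmp/ddtest.bin and a size small enough to run anywhere:

```shell
#!/bin/sh
# Minimal sequential-write test: write 64 MiB of zeroes and let dd report
# the elapsed time and throughput on stderr. The path and size here are
# hypothetical, not taken from the thread.
TESTFILE=/tmp/ddtest.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 2>&1 | tail -1
# Expect a summary line like "67108864 bytes transferred in X secs (Y bytes/sec)".
```

Dividing the byte count by the elapsed seconds gives the MB/s figure; writing through a mounted filesystem (as above) tests the pool, while reading a raw device tests the disk itself.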

- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
 to the same controller. Only difference was
that in Windows the disks weren't in a mirror configuration but were
tested individually. I do understand that a mirror setup offers
roughly the same write speed as individual disk, while the read speed
usually varies from equal to individual disk speed to nearly the
throughput of both disks combined depending on the implementation,
but there is no obvious reason I am seeing why my setup offers both
read and write speeds roughly 1/3 to 1/2 of what the individual disks
are capable of. Dmesg shows:

atapci0: SiI 3124 SATA300 controller port 0x1000-0x100f mem
0x90108000-0x9010807f,0x9010-0x90107fff irq 21 at device 0.0 on
pci4
ad8: 1907729MB WDC WD20EADS-32R6B0 01.00A01 at ata4-master SATA300
ad10: 1907729MB WDC WD20EADS-00R6B0 01.00A01 at ata5-master SATA300

I do recall also testing an alternative configuration in the past,
where I would boot off a UFS disk and have the ZFS mirror consist of
2 disks directly. The bonnie numbers in that case were in line with my
expectations; I was seeing 65-70 MB/s. Note: again, exact same
hardware, exact same disks attached to the exact same controller. To
my knowledge, Solaris/OpenSolaris has an issue where it has to
automatically disable the disk write cache if ZFS is used on top of
partitions instead of raw disks, but as far as I know (I recall reading
this from multiple reputable sources) this issue does not affect FreeBSD.

- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Sun, Jan 24, 2010 at 7:42 PM, Dan Naumov dan.nau...@gmail.com wrote:
 On Sun, Jan 24, 2010 at 7:05 PM, Jason Edwards sub.m...@gmail.com wrote:
 Hi Dan,

 I read on FreeBSD mailinglist you had some performance issues with ZFS.
 Perhaps i can help you with that.

 You seem to be running a single mirror, which means you won't have any speed
 benefit regarding writes, and usually RAID1 implementations offer little to
 no acceleration to read requests also; some even just read from the master
 disk and don't touch the 'slave' mirrored disk unless when writing. ZFS is
 a lot more modern however, although I did not test the performance of its
 mirror implementation.

 But, benchmarking I/O can be tricky:

 1) you use bonnie, but bonnie's tests are performed without a 'cooldown'
 period between the tests; meaning that when test 2 starts, data from test 1
 is still being processed. For single disks and simple I/O this is not so
 bad, but for large write-back buffers and more complex I/O buffering, this
 may be inappropriate. I had patched bonnie some time in the past, but if you
 just want a MB/s number you can use DD for that.

 2) The diskinfo tiny benchmark is single-queue only, I assume, meaning that
 it would not scale well or at all on RAID-arrays. Actual filesystems on
 RAID-arrays use multiple-queue; meaning it would not read one sector at a
 time, but read 8 blocks (of 16KiB) ahead; this is called read-ahead and
 for traditional UFS filesystems its controlled by the sysctl vfs.read_max
 variable. ZFS works differently though, but you still need a real
 benchmark.

 3) You need low-latency hardware; in particular, no PCI controller should be
 used. Only PCI-express based controllers or chipset-integrated Serial ATA
 controllers have proper performance. PCI can hurt performance very badly, and
 has high interrupt CPU usage. Generally you should avoid PCI. PCI-express is
 fine though; it is a completely different interface that is in many ways the
 opposite of what PCI was.

 4) Testing actual realistic I/O performance (in IOps) is very difficult. But
 testing sequential performance should be a lot easier. You may try using dd
 for this.


 For example, you can use dd on raw devices:

 dd if=/dev/ad4 of=/dev/null bs=1M count=1000

 I will explain each parameter:

 if=/dev/ad4 is the input file, the read source

 of=/dev/null is the output file, the write destination. /dev/null means it
 just goes nowhere, so this is a read-only benchmark

 bs=1M is the blocksize, how much data to transfer at a time. The default is
 512 or the sector size, but that's very slow. A value between 64KiB and
 1024KiB is appropriate. bs=1M will select 1MiB, i.e. 1024KiB.

 count=1000 means transfer 1000 pieces, and with bs=1M that means 1000 * 1MiB
 = 1000MiB.



 This example was raw reading sequentially from the start of the device
 /dev/ad4. If you want to test RAIDs, you need to work at the filesystem
 level. You can use dd for that too:

 dd if=/dev/zero of=/path/to/ZFS/mount/zerofile.000 bs=1M count=2000

 This command will read from /dev/zero (all zeroes) and write to a file on
 ZFS-mounted filesystem, it will create the file zerofile.000 and write
 2000MiB of zeroes to that file.
 So this command tests write-performance of the ZFS-mounted filesystem. To
 test read performance, you need to clear caches first by unmounting that
 filesystem and re-mounting it again. This would free up memory containing
 parts of the filesystem as cached (reported in top as Inact(ive) instead
 of Free).

 Please do make sure you double-check a dd command before running it, and run
 as normal user instead of root. A wrong dd command may write to the wrong
 destination and do things you don't want. The only real thing you need to
 check is the write destination (of=). That's where dd is going to write
 to, so make sure its the target you intended. A common mistake made by
 myself was to write dd of=... if=... (starting with of instead of if) and
 thus actually doing the opposite of what I meant to
 do. This can be disastrous if you work with live data, so be careful! ;-)

 Hope any of this was helpful. During the dd benchmark, you can of course
 open a second SSH terminal and start gstat to see the devices current I/O
 stats.

 Kind regards,
 Jason

 Hi and thanks for your tips, I appreciate it :)

 [j...@atombsd ~]$ dd if=/dev/zero of=/home/jago/test1 bs=1M count=1024
 1024+0 records in
 1024+0 records out
 1073741824 bytes transferred in 36.206372 secs (29656156 bytes/sec)

 [j...@atombsd ~]$ dd if=/dev/zero of=/home/jago/test2 bs=1M count=4096
 4096+0 records in
 4096+0 records out
 4294967296 bytes transferred in 143.878615 secs (29851325 bytes/sec)

 This works out to 1GB in 36.2 seconds / 28.2 MB/s in the first test and
 4GB in 143.8 seconds / 28.4 MB/s in the second, roughly consistent with the
 bonnie results. It also sadly seems to confirm the very slow speed :(
 The disks are attached to a 4-port Sil3124 controller and again, my

Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Sun, Jan 24, 2010 at 8:12 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
 On Sun, 24 Jan 2010, Dan Naumov wrote:

 This works out to 1GB in 36.2 seconds / 28.2 MB/s in the first test and
 4GB in 143.8 seconds / 28.4 MB/s in the second, roughly consistent with the
 bonnie results. It also sadly seems to confirm the very slow speed :(
 The disks are attached to a 4-port Sil3124 controller and again, my
 Windows benchmarks showing 65mb/s+ were done on exact same machine,
 with same disks attached to the same controller. Only difference was
 that in Windows the disks weren't in a mirror configuration but were
 tested individually. I do understand that a mirror setup offers
 roughly the same write speed as individual disk, while the read speed
 usually varies from equal to individual disk speed to nearly the
 throughput of both disks combined depending on the implementation,
 but there is no obvious reason I am seeing why my setup offers both
 read and write speeds roughly 1/3 to 1/2 of what the individual disks
 are capable of. Dmesg shows:

 There is a misstatement in the above, namely that a mirror setup offers
 roughly the same write speed as an individual disk.  It is possible for a
 mirror setup to offer a similar write speed to an individual disk, but it is
 also quite possible to get 1/2 (or even 1/3) the speed. A ZFS write to a
 mirror pair requires two independent writes.  If these writes go down
 independent I/O paths, then there is hardly any overhead from the 2nd write.
 If the writes go through a bandwidth-limited shared path then they will
 contend for that bandwidth and you will see much less write performance.

 As a simple test, you can temporarily remove the mirror device from the pool
 and see if the write performance dramatically improves. Before doing that,
 it is useful to see the output of 'iostat -x 30' while under heavy write
 load to see if one device shows a much higher svc_t value than the other.

Ow, ow, WHOA:

atombsd# zpool offline tank ad8s1a

[j...@atombsd ~]$ dd if=/dev/zero of=/home/jago/test3 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 16.826016 secs (63814382 bytes/sec)

Offlining one half of the mirror bumps dd write speed from 28 MB/s to
64 MB/s! Let's see how the Bonnie results change:

Mirror with both parts attached:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 18235 46.7 23137 19.9 13927 13.6 24818 49.3 44919 17.3 134.3  2.1

Mirror with 1 half offline:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         1024 22888 58.0 41832 35.1 22764 22.0 26775 52.3 54233 18.3 166.0  1.6

OK, the Bonnie results have improved, but only slightly.
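
For reference, the 64 MB/s figure is simply dd's byte count divided by the elapsed time it reports; a one-line awk sketch of the conversion, using the numbers from the transcript above:

```shell
# Convert dd's summary ("1073741824 bytes transferred in 16.826016 secs")
# into MB/s; plain sh has no floating point, so awk does the division.
BYTES=1073741824
SECS=16.826016
awk -v b="$BYTES" -v s="$SECS" 'BEGIN { printf "%.1f MB/s\n", b / s / 1e6 }'
# prints: 63.8 MB/s
```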

- Sincerely,
Dan Naumov
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Sun, Jan 24, 2010 at 8:34 PM, Jason Edwards sub.m...@gmail.com wrote:
 ZFS writes to a mirror pair
 requires two independent writes.  If these writes go down independent I/O
 paths, then there is hardly any overhead from the 2nd write.  If the
 writes
 go through a bandwidth-limited shared path then they will contend for that
 bandwidth and you will see much less write performance.

 What he said may confirm my suspicion on PCI. So if you could try the same
 with real Serial ATA via chipset or PCI-e controller you can confirm this
 story. I would be very interested. :P

 Kind regards,
 Jason


This wouldn't explain why a ZFS mirror on 2 disks directly, on the exact
same controller (with the OS running off a separate disk), results in
expected performance, while having the OS run off a ZFS mirror
sitting on top of MBR-partitioned disks, on the same controller,
results in very low speed.

- Dan


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Sun, Jan 24, 2010 at 11:53 PM, Alexander Motin m...@freebsd.org wrote:
 Dan Naumov wrote:
 This works out to 1GB in 36,2 seconds / 28,2mb/s in the first test and
 4GB in 143.8 seconds / 28,4mb/s and somewhat consistent with the
 bonnie results. It also sadly seems to confirm the very slow speed :(
 The disks are attached to a 4-port Sil3124 controller and again, my
 Windows benchmarks showing 65mb/s+ were done on exact same machine,
 with same disks attached to the same controller. Only difference was
 that in Windows the disks weren't in a mirror configuration but were
 tested individually. I do understand that a mirror setup offers
 roughly the same write speed as individual disk, while the read speed
 usually varies from equal to individual disk speed to nearly the
 throughput of both disks combined depending on the implementation,
 but there is no obvious reason I am seeing why my setup offers both
 read and write speeds roughly 1/3 to 1/2 of what the individual disks
 are capable of. Dmesg shows:

 atapci0: SiI 3124 SATA300 controller port 0x1000-0x100f mem
 0x90108000-0x9010807f,0x9010-0x90107fff irq 21 at device 0.0 on
 pci4
 ad8: 1907729MB WDC WD20EADS-32R6B0 01.00A01 at ata4-master SATA300
 ad10: 1907729MB WDC WD20EADS-00R6B0 01.00A01 at ata5-master SATA300

 8.0-RELEASE, and especially 8-STABLE provide alternative, much more
 functional driver for this controller, named siis(4). If your SiI3124
 card installed into proper bus (PCI-X or PCIe x4/x8), it can be really
 fast (up to 1GB/s was measured).

 --
 Alexander Motin

Sadly, it seems that utilizing the new siis driver doesn't do much good:

Before utilizing siis:

iozone -s 4096M -r 512 -i0 -i1

              KB  reclen   write  rewrite    read   reread
         4194304     512   28796    28766   51610    50695

After enabling siis in loader.conf (and ensuring the disks show up as ada):

iozone -s 4096M -r 512 -i0 -i1

              KB  reclen   write  rewrite    read   reread
         4194304     512   28781    28897   47214    50540

I've checked with the manufacturer and it seems that the Sil3124 in
this NAS is indeed a PCI card. More info on the card in question is
available at http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
I have the card described later on the page, the one with 4 SATA ports
and no eSATA. Alright, so it being PCI is probably a bottleneck in
some ways, but that still doesn't explain performance THAT bad,
considering that the same hardware, same disks and same disk controller
push over 65mb/s in both reads and writes in Win2008. And again, I am
pretty sure that I had close to the expected results when I was
booting a UFS FreeBSD installation off an SSD (attached directly to
a SATA port on the motherboard) while running the same kinds of
benchmarks with Bonnie and dd on a ZFS mirror made directly on top of
2 raw disks.
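For reference, the dd-style sequential test referred to above can be sketched as follows. This is a scaled-down illustration, not the exact commands from the original post: the target file, block size and count here are mine, and on the real system the file would live on the ZFS pool and be gigabytes in size.

```shell
# Scaled-down sketch of a dd sequential write/read throughput test.
# /tmp and 4 MB are illustrative; a real run would use a multi-GB file
# on the pool under test.
dd if=/dev/zero of=/tmp/ddtest bs=1048576 count=4 2>/dev/null  # sequential write
dd if=/tmp/ddtest of=/dev/null bs=1048576 2>/dev/null          # sequential read
# dd reports elapsed time and bytes transferred on stderr;
# MB/s = bytes / seconds / 1048576
```
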


- Sincerely,
Dan Naumov


Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-24 Thread Dan Naumov
On Mon, Jan 25, 2010 at 2:14 AM, Dan Naumov dan.nau...@gmail.com wrote:
 On Sun, Jan 24, 2010 at 11:53 PM, Alexander Motin m...@freebsd.org wrote:
 Dan Naumov wrote:
 This works out to 1GB in 36,2 seconds / 28,2mb/s in the first test and
 4GB in 143.8 seconds / 28,4mb/s and somewhat consistent with the
 bonnie results. It also sadly seems to confirm the very slow speed :(
 The disks are attached to a 4-port Sil3124 controller and again, my
 Windows benchmarks showing 65mb/s+ were done on exact same machine,
 with same disks attached to the same controller. Only difference was
 that in Windows the disks weren't in a mirror configuration but were
 tested individually. I do understand that a mirror setup offers
 roughly the same write speed as individual disk, while the read speed
 usually varies from equal to individual disk speed to nearly the
 throughput of both disks combined depending on the implementation,
 but there is no obvious reason I am seeing why my setup offers both
 read and write speeds roughly 1/3 to 1/2 of what the individual disks
 are capable of. Dmesg shows:

 atapci0: SiI 3124 SATA300 controller port 0x1000-0x100f mem
 0x90108000-0x9010807f,0x9010-0x90107fff irq 21 at device 0.0 on
 pci4
 ad8: 1907729MB WDC WD20EADS-32R6B0 01.00A01 at ata4-master SATA300
 ad10: 1907729MB WDC WD20EADS-00R6B0 01.00A01 at ata5-master SATA300

 8.0-RELEASE, and especially 8-STABLE provide alternative, much more
 functional driver for this controller, named siis(4). If your SiI3124
 card installed into proper bus (PCI-X or PCIe x4/x8), it can be really
 fast (up to 1GB/s was measured).

 --
 Alexander Motin

 Sadly, it seems that utilizing the new siis driver doesn't do much good:

 Before utilizing siis:

 iozone -s 4096M -r 512 -i0 -i1

               KB  reclen   write  rewrite    read   reread
          4194304     512   28796    28766   51610    50695

 After enabling siis in loader.conf (and ensuring the disks show up as ada):

 iozone -s 4096M -r 512 -i0 -i1

               KB  reclen   write  rewrite    read   reread
          4194304     512   28781    28897   47214    50540

Just to add to the numbers above, here is the exact same benchmark on 1
disk (the 2nd disk detached from the mirror) while using the siis driver:

              KB  reclen   write  rewrite    read   reread
         4194304     512   57760    56371   68867   74047


- Dan


posting coding bounties, appropriate money amounts?

2010-01-22 Thread Dan Naumov
Hello

I am curious about posting some coding bounties, my current interest
revolves around improving the ZVOL functionality in FreeBSD: fixing
the known ZVOL SWAP reliability/stability problems as well as making
ZVOLs work as a dumpon device (as is already the case in OpenSolaris)
for crash dumps. I am a private individual and not some huge Fortune
100 company and, while I am not exactly rich, I am willing to put some of my
personal money towards this. I am curious though, what would be the
best way to approach this: directly approaching committer(s) with the
know-how-and-why of the areas involved or through the FreeBSD
Foundation? And how would one go about calculating the appropriate
amount of money for such a thing?

Thanks.

- Sincerely,
Dan Naumov


RE: Drive errors in raidz array

2010-01-22 Thread Dan Naumov
 I have a system with 24 drives in raidz2.

Congrats, you answered your own question within the first sentence :)

ANSWER: As per the ZFS documentation, don't make raidz/raidz2 vdevs
wider than 9 disks or bad things (tm) will happen.
Google will tell you more.
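For illustration only (the device names and the exact grouping are my assumptions, not something from the original post), splitting 24 drives into three 8-disk raidz2 vdevs in a single pool would look roughly like this:

```shell
# Hypothetical sketch: one pool built from three 8-disk raidz2 vdevs
# instead of a single 24-disk raidz2. Device names are illustrative.
zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
    raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21 da22 da23
```

ZFS stripes across the three vdevs, so this keeps each redundancy group at a manageable width while still presenting one pool.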

- Sincerely,
Dan Naumov


Loader, MBR and the boot process

2010-01-21 Thread Dan Naumov
I recently found a nifty FreeBSD ZFS root installation script and have
been reworking it a bit to suit my needs better, including changing it
from GPT to MBR partitioning. However, I was stumped: even though I
had done everything right (or so I thought), the system would get
stuck at the loader and refuse to go anywhere. After trying over a dozen
different things, it dawned on me to change the partition order inside
the slice. I had 1) swap 2) freebsd-zfs, and for the test, I got rid of
swap altogether and gave the entire slice to the freebsd-zfs
partition. Suddenly, my problem went away and the system booted just
fine. So it seems that the loader requires the partition containing
the files vital to booting to be the first partition in the slice, and
that a layout of swap first, then the rest, doesn't work.

The thing is, I am absolutely positive that in the past, I've had
sysinstall created installs using MBR partitioning and that I had swap
as my first partition inside the slice and that it all worked dandy.
Has this changed at some point? Oh, and for the curious the
installation script is here: http://jago.pp.fi/zfsmbrv1-works.sh


- Sincerely,
Dan Naumov


Re: Loader, MBR and the boot process

2010-01-21 Thread Dan Naumov
On Fri, Jan 22, 2010 at 6:12 AM, Thomas K. f...@gothschlampen.com wrote:
 On Fri, Jan 22, 2010 at 05:57:23AM +0200, Dan Naumov wrote:

 Hi,

 I recently found a nifty FreeBSD ZFS root installation script and
 been reworking it a bit to suit my needs better, including changing it
 from GPT to MBR partitioning. However, I was stumped, even though I
 had done everything right (or so I thought), the system would get
 stuck at Loader and refuse to go anywhere. After trying over a dozen

 probably this line is the cause:

 dd if=/mnt2/boot/zfsboot of=/dev/${TARGETDISK}s1a skip=1 seek=1024

 Unless by swap first you meant the on-disk location, and not the
 partition letter. If swap is partition a, you're writing the loader
 into swapspace.


 Regards,
 Thomas

At first you made me feel silly, but then I decided to double-check: I
uncommented the swap line in the partitioning part again, ensured I
was writing the bootloader to ${TARGETDISK}s1b, and ran the script.
Same problem, it hangs at the loader. Again, if I comment out the swap,
giving the entire slice to ZFS, and then write the bootloader to
${TARGETDISK}s1a and run the script, everything works.


- Sincerely,
Dan Naumov


Re: Loader, MBR and the boot process

2010-01-21 Thread Dan Naumov
On Fri, Jan 22, 2010 at 6:49 AM, Dan Naumov dan.nau...@gmail.com wrote:
 On Fri, Jan 22, 2010 at 6:12 AM, Thomas K. f...@gothschlampen.com wrote:
 On Fri, Jan 22, 2010 at 05:57:23AM +0200, Dan Naumov wrote:

 Hi,

 I recently found a nifty FreeBSD ZFS root installation script and
 been reworking it a bit to suit my needs better, including changing it
 from GPT to MBR partitioning. However, I was stumped, even though I
 had done everything right (or so I thought), the system would get
 stuck at Loader and refuse to go anywhere. After trying over a dozen

 probably this line is the cause:

 dd if=/mnt2/boot/zfsboot of=/dev/${TARGETDISK}s1a skip=1 seek=1024

 Unless by swap first you meant the on-disk location, and not the
 partition letter. If swap is partition a, you're writing the loader
 into swapspace.


 Regards,
 Thomas

 At first you made me feel silly, but then I decided to double-check, I
 uncommented the swap line in the partitioning part again, ensured I
 was writing the bootloader to ${TARGETDISK}s1b and ran the script.
 Same problem, hangs at loader. Again, if I comment out the swap,
 giving the entire slice to ZFS and then write the bootloader to
 ${TARGETDISK}s1a, run the script, everything works.

I have also just tested creating 2 slices, like this:

gpart create -s mbr ${TARGETDISK}
gpart add -s 3G -t freebsd ${TARGETDISK}
gpart create -s BSD ${TARGETDISK}s1
gpart add -t freebsd-swap ${TARGETDISK}s1

gpart add -t freebsd ${TARGETDISK}
gpart create -s BSD ${TARGETDISK}s2
gpart add -t freebsd-zfs ${TARGETDISK}s2

gpart set -a active -i 2 ${TARGETDISK}
gpart bootcode -b /mnt2/boot/boot0 ${TARGETDISK}


and later:

dd if=/mnt2/boot/zfsboot of=/dev/${TARGETDISK}s2 count=1
dd if=/mnt2/boot/zfsboot of=/dev/${TARGETDISK}s2a skip=1 seek=1024


Putting the swap into its own slice and then putting FreeBSD into
its own slice worked fine. So why the hell can't they both coexist in
one slice if the swap comes first?


- Dan Naumov


8.0-RELEASE / gpart / GPT / marking a partition as active

2010-01-19 Thread Dan Naumov
It seems that quite a few BIOSes have serious issues booting off disks
using GPT partitioning when no partition present is marked as
active. See http://www.freebsd.org/cgi/query-pr.cgi?pr=115406&cat=bin
for a prime example.

In 8.0-RELEASE, using gpart, setting a slice as active in MBR
partitioning mode is trivial, ie:

gpart set -a active -i 1 DISKNAME

However, trying to do the same thing with GPT partitioning yields no results:

gpart set -a active -i 1 DISKNAME
gpart: attrib 'active': Device not configured

As a result of this issue, I can configure and make a successful
install using GPT in 8.0, but I cannot boot off it using my Intel
D945GCLF2 board.

I have found this discussion from about a month ago:
http://www.mail-archive.com/freebsd-sta...@freebsd.org/msg106918.html
where Robert mentions that gpart set -a active -i 1 is no longer
needed in 8-STABLE, because the pmbr will be marked as active during
the installation of the bootcode. Is there anything I can do to
achieve the same result in 8.0-RELEASE, or is installing from a
snapshot of 8-STABLE my only option?

Thanks.

- Sincerely,
Dan Naumov


Re: 8.0-RELEASE / gpart / GPT / marking a partition as active

2010-01-19 Thread Dan Naumov
On 1/19/2010 12:11 PM, Dan Naumov wrote:
 It seems that quite a few BIOSes have serious issues booting off disks
 using GPT partitioning when no partition present is marked as
 active. See http://www.freebsd.org/cgi/query-pr.cgi?pr=115406&cat=bin
 for a prime example.

 In 8.0-RELEASE, using gpart, setting a slice as active in MBR
 partitioning mode is trivial, ie:

 gpart set -a active -i 1 DISKNAME

 However, trying to do the same thing with GPT partitioning yields no results:

 gpart set -a active -i 1 DISKNAME
 gpart: attrib 'active': Device not configured

 As a result of this issue, I can configure and make a successful
 install using GPT in 8.0, but I cannot boot off it using my Intel
 D945GCLF2 board.

 I have found this discussion from about a month ago:
 http://www.mail-archive.com/freebsd-sta...@freebsd.org/msg106918.html
 where Robert mentions that gpart set -a active -i 1 is no longer
 needed in 8-STABLE, because the pmbr will be marked as active during
 the installation of the bootcode. Is there anything I can do to
 achieve the same result in 8.0-RELEASE or is installing from a
 snapshot of 8-STABLE my only option?

 After using gpart to create the GPT (and thus the PMBR and its
 bootcode), why not simply use fdisk -a -1 DISKNAME to set the PMBR
 partition active?

According to the fdisk output, the partition flag did change from 0 to
80. Can the "fdisk: Class not found" error showing up at the very end
of running fdisk -a -1 DISKNAME be safely ignored?

- Dan Naumov


(SOLVED) Re: installing FreeBSD 8 on SSDs and UFS2 - partition alignment, block sizes, what does one need to know?

2010-01-15 Thread Dan Naumov
On Fri, Jan 15, 2010 at 6:38 PM, Rick Macklem rmack...@uoguelph.ca wrote:


 On Tue, 12 Jan 2010, Dan Naumov wrote:

 For my upcoming storage system, the OS install is going to be on a
 80gb Intel SSD disk and for various reasons, I am now pretty convinced
 to stick with UFS2 for the root partition (the actual data pool will
 be ZFS using traditional SATA disks). I am probably going to use GPT
 partitioning and have the SSD host the swap, boot, root and a few
 other partitions. What do I need to know in regards to partition
 alignment and filesystem block sizes to get the best performance out
 of the Intel SSDs?

 I can't help with your question, but I thought I'd mention that there
 was a recent post (on freebsd-current, I think?) w.r.t. using an SSD
 for the ZFS log file. It suggested that that helped with ZFS perf., so
 you might want to look for the message.

 rick

I have managed to figure out the essential things to know by now, I
just wish there was a single, easy-to-grasp webpage or HOWTO
describing the whys and hows so I wouldn't have had to spend the
entire day googling things to get a proper grasp on the issue :)

To (perhaps a bit too much) simplify things, if you are using an SSD
with FreeBSD, you:

1) Should use GPT

2) Should create the freebsd-boot partition as normal (to ensure
compatibility with some funky BIOSes)

3) All additional partitions should be aligned, meaning that their
boundaries should be divisible by 1024 KB (that's 2048 logical blocks
in gpart). That is, having created your freebsd-boot partition, your
next partition should start at block 2048, and each partition size
should be divisible by 2048 blocks. This applies to ALL further
partitions added to the disk, so you WILL end up having some empty
space between them, but at most a few MBs of space are lost.

P.S: My oversimplification is that MOST SSDs will be just fine
with a 512 KB / 1024-block alignment. However, _ALL_ SSDs will be fine
with a 1024 KB / 2048-block alignment.
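The arithmetic behind rule 3 can be sketched as a small shell helper (the function name and the sample inputs are mine, purely illustrative): round any proposed starting block up to the next multiple of 2048.

```shell
# Round a starting block up to the next multiple of 2048 blocks
# (1024 KB at 512-byte sectors), per rule 3 above.
align_up() {
    echo $(( (($1 + 2047) / 2048) * 2048 ))
}

align_up 34        # first usable GPT data block -> 2048
align_up 2048      # already aligned -> 2048
align_up 1000000   # -> 1001472
```

The resulting numbers are what you would pass as the `-b` (start block) argument to `gpart add`.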


- Sincerely,
Dan Naumov


installing FreeBSD 8 on SSDs and UFS2 - partition alignment, block sizes, what does one need to know?

2010-01-12 Thread Dan Naumov
For my upcoming storage system, the OS install is going to be on a
80gb Intel SSD disk and for various reasons, I am now pretty convinced
to stick with UFS2 for the root partition (the actual data pool will
be ZFS using traditional SATA disks). I am probably going to use GPT
partitioning and have the SSD host the swap, boot, root and a few
other partitions. What do I need to know in regards to partition
alignment and filesystem block sizes to get the best performance out
of the Intel SSDs?

Thanks.

- Sincerely,
Dan Naumov


Re: ZFS on top of GELI

2010-01-12 Thread Dan Naumov
2010/1/12 Rafał Jackiewicz free...@o2.pl:
Thanks, could you do the same, but using 2 .eli vdevs mirrored
together in a ZFS mirror?

- Sincerely,
Dan Naumov

 Hi,

 Proc: Intell Atom 330 (2x1.6Ghz) - 1 package(s) x 2 core(s) x 2 HTT threads
 Chipset: Intel 82945G
 Sys: 8.0-RELEASE FreeBSD 8.0-RELEASE #0
 empty file: /boot/loader.conf
 Hdd:
   ad4: 953869MB Seagate ST31000533CS SC15 at ata2-master SATA150
   ad6: 953869MB Seagate ST31000533CS SC15 at ata3-master SATA150
 Geli:
   geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
   geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2


 Results:

 *** single drive           write MB/s   read MB/s
 eli.journal.ufs2           23           14
 eli.zfs                    19           36

 *** mirror                 write MB/s   read MB/s
 mirror.eli.journal.ufs2    23           16
 eli.zfs                    31           40
 zfs                        83           79

 *** degraded mirror        write MB/s   read MB/s
 mirror.eli.journal.ufs2    16           9
 eli.zfs                    56           40
 zfs                        86           71

Thanks a lot for your numbers, the relevant part for me was this:

*** mirror            write MB/s   read MB/s
eli.zfs               31           40
zfs                   83           79

*** degraded mirror   write MB/s   read MB/s
eli.zfs               56           40
zfs                   86           71

31 mb/s writes and 40 mb/s reads is something that I guess I could
potentially live with. I am guessing the main problem with stacking ZFS
on top of geli like this is that writing to a mirror requires double
the CPU work: all written data has to be encrypted twice (once for
each disk), instead of being encrypted once and then written to both
disks, as would be the case if the crypto sat on top of ZFS rather
than ZFS sitting on top of the crypto.
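For reference, the stacking being discussed (a ZFS mirror on top of two per-disk GELI providers) is set up roughly like this. The geli init lines match the quoted benchmark setup; the attach lines and the pool name "tank" are my sketch:

```shell
# ZFS-on-GELI stacking as benchmarked above: each disk gets its own
# encrypted provider, so every mirrored write is encrypted twice.
# geli init parameters are from the quoted setup; pool name is illustrative.
geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2
geli attach -k /etc/keys/ad4s2.key /dev/ad4s2
geli attach -k /etc/keys/ad6s2.key /dev/ad6s2
zpool create tank mirror ad4s2.eli ad6s2.eli
```
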

I now have to reevaluate my planned use of an SSD, though. I was
planning to use a 40gb partition on an Intel 80GB X25-M G2 as a
dedicated L2ARC device for a ZFS mirror of 2 x 2tb disks. However,
these numbers make it quite obvious that I would already be
CPU-starved at 40-50mb/s throughput on the encrypted ZFS mirror, so
adding an L2ARC SSD, while improving latency, would do practically
nothing for actual disk read speeds, considering the L2ARC itself
would, too, have to sit on top of a GELI device.

- Sincerely,
Dan Naumov


a question on ZFS boot/root in 8.0-RELEASE

2010-01-11 Thread Dan Naumov
Hello list.

My concern is this: I really really like freebsd-update and want to
continue using it. freebsd-update, however, assumes that no part of
your base system has been compiled by hand; it's intended to be used
to update from official binaries to other official binaries. I am also
gathering (from things I've read so far) that you HAVE to build a
custom loader if you want to boot off a ZFS mirror or raidz... but
what about a non-redundant ZFS pool as system root in 8.0-RELEASE? Can
I have a full ZFS FreeBSD installation on a non-redundant ZFS pool and
have the system boot off it without having to compile anything
manually with the existing binaries provided on the 8.0 install DVD?

- Sincerely,
Dan Naumov


bin/115406: [patch] gpt(8) GPT MBR hangs award BIOS on boot

2010-01-11 Thread Dan Naumov
I have a few questions about this PR:
http://www.freebsd.org/cgi/query-pr.cgi?pr=115406&cat=bin

1) Is this bug now officially fixed as of 8.0-RELEASE? Ie, can I
expect to set up a completely GPT-based system using an Intel
D945GCLF2 board and not have the installation crap out on me later?

2) The very last entry in the PR states the following:
The problem has been addressed in gart(8) and gpt(8) is obsolete, so
no follow-up is to be expected at this time. Close the PR to reflect
this.

What exactly is gart and where do I find its manpage?
http://www.freebsd.org/cgi/man.cgi comes up with nothing. Also, does
this mean that GPT is _NOT_ in fact fixed regarding this bug?

Thanks.

- Sincerely,
Dan Naumov


configuring and ssh tunneling xorg from a headless FreeBSD machine

2009-06-29 Thread Dan Naumov
Hello list.

I have the following setup: a Windows Vista x64 SP1 machine (my
primary desktop) and a FreeBSD 7.2/amd64 running on a home NAS system
that's relatively powerful (Intel Atom 330 dualcore, 2gb ram). I would
like to be able to run xorg and a simple desktop on the headless
FreeBSD NAS and be able to interact with it from my Vista machine.
What are the steps I need to take for this? Obviously I need to build
xorg and some sort of a wm (probably gnome2-lite) on the FreeBSD
machine and install an xserver on the Vista machine, but then what?
Any pointers to guides and such are welcome.

Please keep me CCed as I am not subscribed.

- Sincerely,
Dan Naumov


sponsoring ZFS development on FreeBSD

2009-06-06 Thread Dan Naumov
Hello

My question is concerning sponsoring the FreeBSD project and ZFS
development in particular. I know I am just a relatively poor person
so I can't contribute much (maybe on the order of 20-30 euro a month),
but I keep seeing FreeBSD core team members keep mentioning we value
donations of all sizes, so what the hell :) Anyways, in the past I
have directed my donations to The FreeBSD Foundation, if I want to
ensure that as much of my money as possible goes directly to benefit
the development of ZFS support on FreeBSD, should I continue donating
to the foundation or should I be sending donations directly to
specific developers?


Sincerely
- Dan Naumov


pkg_deinstall: delete all packages installed, except for X, Y and Z

2009-06-03 Thread Dan Naumov
Hello list.

I am trying to clean up a system with a LOT of cruft. Is there some
argument I could pass to pkg_deinstall that would result in deleting
all packages installed except for X, Y and Z (and obviously their
dependencies)?

Thanks!

- Dan Naumov


Re: pkg_deinstall: delete all packages installed, except for X, Y and Z

2009-06-03 Thread Dan Naumov
Thanks a lot, this worked like a charm!

- Dan Naumov


On Wed, Jun 3, 2009 at 12:09 PM, Wojciech
Pucharwoj...@wojtek.tensor.gdynia.pl wrote:
 Hello list.

 I am trying to clean up a system with a LOT of cruft. Is there some
 argument I could pass to pkg_deinstall that would result in delete
 all packages installed, except for X, Y and Z (and obviously their
 dependencies)?

 just do

 pkg_info | cut -f 1 -d ' ' > /tmp/pkglist
 edit pkglist and delete lines X, Y and Z

 do

 pkg_delete `cat /tmp/pkglist`
 rm /tmp/pkglist

 ignore errors about package can't be deleted because X, Y or Z requires it.
 it's exactly what you want.
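The same recipe can also be done non-interactively by filtering with grep -v instead of hand-editing the list. The package names below are placeholders, and `pkg_list` is a stand-in for the real `pkg_info | cut` pipeline so the filtering step can be shown on its own:

```shell
# Non-interactive variant of the recipe above: filter out the packages
# to keep instead of deleting their lines in an editor.
pkg_list() {
    # stand-in for: pkg_info | cut -f 1 -d ' '
    printf '%s\n' cruft-0.9 keepme-1.2 morecruft-3.1 alsokeep-2.0
}

pkg_list | grep -v -e '^keepme-' -e '^alsokeep-' > /tmp/pkglist
cat /tmp/pkglist
# then: pkg_delete `cat /tmp/pkglist` && rm /tmp/pkglist
```
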
