Re: ZFSKnownProblems - needs revision?

2009-04-09 Thread Lorenzo Perone


Hi,

in one production case (1) I haven't seen panics or deadlocks for a
long time, yet on another, much more powerful machine (2) I could not
get rid of "vm_thread_new: kstack allocation failed", which ultimately
rendered the machine useless pretty fast. This was still the case as of
RELENG_7 in November (7.1-PRERELEASE), when I decided to stop the ZFS
experiment for now and went back to UFS. I'm now trying to understand
whether 7.2 is worth a new try, or whether, for that matter, the only
reasonable option is to wait for 8.0.


Perhaps worth noting: the kstack errors still occurred (albeit after
more time) with all zpools exported (and the system rebooted) but
zfs.ko still loaded. Only after rebooting without zfs_load=YES did the
server begin to work seamlessly, and it has done so for months.


I'm also wondering whether, and how much, the underlying driver/provider
(mfi, mpt, ad, ciss, etc.) matters with regard to the remaining/
recurring ZFS problems, since I've seen such different behavior
on different machines.


(1) Home-built Opteron / 2 GB RAM / SATA ad / 7.1-PRERELEASE w. the usual
tuning, one zpool on a SATA mirror receiving rsync backups of several
servers
(2) DELL PE 1950, one quad-core Xeon / 8 GB RAM / LSI mpt / 7.1-PRERELEASE w.
many tunings tried, one zpool on a partition on top of HW RAID 1,
a moderately loaded mailserver box running Courier and MySQL


Regards,

Lorenzo



Re: ZFS MFC heads up

2009-05-21 Thread Lorenzo Perone



On 20.05.2009, at 22:41, Kip Macy wrote:


On Wed, May 20, 2009 at 3:11 PM, Mike Tancsa m...@sentex.net wrote:

At 05:59 PM 5/20/2009, Kip Macy wrote:


If you choose to upgrade a pool to take advantage of new features you
will no longer be able to use it with sources prior to today. 'zfs
send/recv' is not expected to inter-operate between different pool
versions.





Primarily what was in Pawel's commit to HEAD (see below).

The following changes have also been brought in:

- the recurring deadlock was fixed by deferring vinactive to a dedicated thread
- zfs boot for all types now works
- kmem now goes up to 512GB so arc is now limited by physmem
- the arc now experiences backpressure from the vm (which can be too
much - but allows ZFS to work without any tunables on amd64)


great awesome incredible gorgeous superb fantastic excellent supercool

THANX! :-)

* dancing around with loud music csupping all over the place... *

Lorenzo




Re: ZFS MFC heads down

2009-05-23 Thread Lorenzo Perone


On 22.05.2009, at 11:45, Pertti Kosunen wrote:


Kirk Strauser wrote:
So far so good here (amd64, Core2 Duo, ICH9 SATA) but I'm too  
chicken to upgrade the on-disk format yet.


Me too: upgraded the pool to v13 yesterday and everything is still OK.
Also removed all loader.conf tunables. Many thanks to the FreeBSD team.


Ditto: first machine in production receiving rsync backups, amd64 with 2 GB RAM,
an old 1.2 GHz Athlon, a 400 GB single-disk pool, and it has worked perfectly so far.
No deadlocks, no panics (tunables removed), and the ARC behaves nicely
too! Looking forward to using it on a more recent machine in more
critical production environments very soon.


Thanks to the whole team. Impeccable work. Keep it up.

Lorenzo



ZFS boot on zfs mirror

2009-05-25 Thread Lorenzo Perone


Hello to all,

Having gotten a taste for it now, and having read the news from Kip Macy about


-  zfs boot for all types now works



I was wondering if anyone has an updated tutorial on how to achieve
a ZFS-only bootable FreeBSD with a mirrored zpool. While gmirror is a
very nice thing, and I suppose it would be relatively easy to build a
pool on top of a gmirror, I much prefer the idea of a ZFS mirror
with the checksumming and recovery features ZFS has (although I
remember a post by pjd somewhere saying that gmirror actually has
checksumming too, just not the automatic recovery, so given the
possibility to activate it, it could still be an option...).


Searching around I found this tutorial on how to set up a ZFS bootable  
system, which is mostly straightforward:


http://blogs.freebsdish.org/lulf/2008/12/16/setting-up-a-zfs-only-system/

However it leaves a few questions open... How am I supposed to make a
ZFS mirror out of it? Suppose I have ad4 and ad6: should I repeat the
exact same gpart steps for both ad4 and ad6, and then do a "zpool
create data mirror ad4p3 ad6p3"? How about swap? I suppose it will be
on one of the disks? And what if I start with one disk and add the
second one later with zpool attach?

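Just to make the question concrete, this is roughly what I have in mind
(the partition layout, sizes and the bootcode step are my assumptions from
mixing up the tutorials I found, so please correct me):

gpart create -s gpt ad4
gpart add -b 34 -s 128 -t freebsd-boot ad4
gpart add -s 4194304 -t freebsd-swap ad4     # 2 GB for swap, just a guess
gpart add -t freebsd-zfs ad4
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad4
(repeat the same gpart steps for ad6)
zpool create data mirror ad4p3 ad6p3
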

Any suggestion/links for this (also other strategies if recommended)  
would be very welcome, and I'll be happy to share the results when and  
if I succeed...


BTW, is there any limitation on i386 for the boot/root features? The
machine which would be free for this experiment is i386 (P4 4 GHz, 4 GB
RAM).


Regards,

Lorenzo




Re: ZFS boot on zfs mirror

2009-05-26 Thread Lorenzo Perone

Hi All,

Thanx for all the feedback!

Philipp: your idea with manageBE is really nice :)
It would surely be good for a test/development machine,
I'll think about using it... (sounds a bit like
FreeBSD goin' the Nexenta way...)

Mickael: your example looks much more like what I was
looking for (and thank god UNIX is still mostly
ASCII, so I can follow the link you posted).

But, just as a side question: how big a risk of
creating an [ugly] race condition is it, actually,
to use swap on a zvol?

Yet another question would be: how much is performance
impacted by the ZFS overhead (OK, leaving aside that
a swapping system needs RAM - wherever the swap is located...)?
But hey, snapshotting swap - isn't THAT funky? ;)

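(By swap on a zvol I mean something along these lines, with the size
just as an example:

zfs create -V 4gb tank/swap
zfs set org.freebsd:swap=on tank/swap

so that the zvol gets swapon'ed at boot.)
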
Thanx to all for the feedback, it's great to
be a FreeBSD user all the time!

I'll be trying to set this up ASAP.

Regards,

Lorenzo

On 26.05.2009, at 11:26, Mickael MAILLOT wrote:


Hi,

I prefer to use the zfsboot boot sector; an example is better than a long
talk:


$ zpool create tank mirror ad4 ad6
$ zpool export tank
$ dd if=/boot/zfsboot of=/dev/ad4 bs=512 count=1
$ dd if=/boot/zfsboot of=/dev/ad6 bs=512 count=1
$ dd if=/boot/zfsboot of=/dev/ad4 bs=512 skeep=1  seek=1024
$ dd if=/boot/zfsboot of=/dev/ad6 bs=512 skeep=1  seek=1024
$ zpool import tank
$ zpool set bootfs=tank tank
$ zfs set mountpoint=legacy tank

add vfs.root.mountfrom=zfs:tank to your loader.conf
now you can boot on ad4 or ad6

Source:
http://www.waishi.jp/~yosimoto/diary/?date=20080909

2009/5/25 Philipp Wuensche cryx-free...@h3q.com:

Lorenzo Perone wrote:


Hello to all,

Having gotten a taste for it now, and having read the news from Kip Macy about


-  zfs boot for all types now works



I was wondering if anyone has some updated tutorial on how to achieve a
zfs-only bootable FreeBSD with a mirrored zpool.


My own howto and script to do the stuff automated:
http://outpost.h3q.com/patches/manageBE/create-FreeBSD-ZFS-bootfs.txt

But beware, it is meant to use with
http://anonsvn.h3q.com/projects/freebsd-patches/wiki/manageBE
afterwards. But the steps are the same.

Searching around I found this tutorial on how to set up a ZFS bootable
system, which is mostly straightforward:

http://blogs.freebsdish.org/lulf/2008/12/16/setting-up-a-zfs-only-system/

However it leaves a few questions open... How am I supposed to make a
zfs mirror out of it? Suppose I have ad4 and ad6, should I repeat the
exact same gpart-steps for both ad4 and ad6, and then make a zpool
create data mirror ad4p3 ad6p3?


Exactly.


How about swap? I suppose it will be on
one of the disks?


I keep swap in a separate partition. You could either use two swap
partitions, one on each disk, or use gmirror to mirror a single swap
partition to be safe from a disk crash.


And what if I start with one disk and add the second
one later with zpool attach?


This will work. Just do the same gpart commands on the second disk and
use zpool attach.

greetings,
philipp



Re: cvsup.de.FreeBSD.org out of sync? (was: make buildkernel KERNCONF=GENERIC fails)

2009-05-26 Thread Lorenzo Perone

On 27.05.2009, at 00:08, Per olof Ljungmark wrote:


Ruben van Staveren wrote:

On 26 May 2009, at 22:38, Christian Walther wrote:

it finished successfully. From my point of view it appears that
cvsup.de.freebsd.org is out of sync.
cvsup.de.freebsd.org is definitely out of sync. I had build errors  
which only went away after switching to a different cvsup server


Same here, a week ago.


Same here too... a few minutes and a few hours ago,
with cvsup5.de.freebsd.org as well...

regards,

Lorenzo




Re: loader not working with GPT and LOADER_ZFS_SUPPORT

2009-05-27 Thread Lorenzo Perone


On 27.05.2009, at 14:48, Artis Caune wrote:

I tried booting from a disk with GPT scheme, with a /boot/loader  
build
with LOADER_ZFS_SUPPORT=yes in make.conf. I get the following  
error:


panic: free: guard1 fail @ 0x2fd4a6ac from
/usr/src/sys/boot/i386/libi386/biosdisk.c:1053



MFC r185095 fixed this problem!

http://svn.freebsd.org/viewvc/base?view=revision&revision=185095


Hi, I'm a bit confused:

I can't find this change (rev 185095) in the stable log, yet stable
has some other recent changes related to the current posts (which were
in turn committed to head as well)...


http://svn.freebsd.org/viewvc/base/head/sys/boot/i386/libi386/biosdisk.c?view=log
http://svn.freebsd.org/viewvc/base/stable/7/sys/boot/i386/libi386/biosdisk.c?view=log

Maybe I'm misunderstanding how things eventually get into -stable;
anyway, which revision should I use now for a peaceful world & boot? :)

I'll go for the -head version for my next try..

Regards,

Lorenzo




ZFS booting without partitions (was: ZFS boot on zfs mirror)

2009-05-28 Thread Lorenzo Perone

Hi,

I tried hard... but without success ;(

The result is: when choosing the disk with the zfs boot
sectors on it (in my case F5, which goes to ad6), the kernel
is not found. The console shows:

forth not found
definitions not found
only not found
(the above repeated several times)

can't load 'kernel'

and I get thrown to the loader prompt.
lsdev does not show any ZFS devices.

Strange thing: if I boot from the other disk, F1, which is my
ad4 containing the normal ufs system I used to make up the other
one, and escape to the loader prompt, lsdev actually sees the
zpool which is on the other disk, and shows:
zfs0: tank

I tried booting with boot zfs:tank or zfs:tank:/boot/kernel/kernel,
but there I get the panic: free: guard1 fail message.
(would boot zfs:tank:/boot/kernel/kernel be correct, anyways?)

Sure I'm doing something wrong, but what...? Is it a problem that
the pool is made out of the second disk only (ad6)?

Here are my details (note: latest stable and biosdisk.c merged
with changes shown in r185095. no problems in buildworld/kernel):

snip

Machine: p4 4GHz 4 GB RAM (i386)

Note: the pool has actually a different name (heidi
instead of tank, if this can be of any relevance...),
just using tank here as it's one of the conventions...

mount (just to show my starting situation)

/dev/mirror/gm0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/mirror/gm0s1e on /tmp (ufs, local, soft-updates)
/dev/mirror/gm0s1f on /usr (ufs, local, soft-updates)
/dev/mirror/gm0s1d on /var (ufs, local, soft-updates)

gmirror status
  NameStatus  Components
mirror/gm0  DEGRADED  ad4
(ad6 used to be the second disk...)

echo 'LOADER_ZFS_SUPPORT=yes' >> /etc/make.conf

cd /usr/src
make buildworld && make buildkernel KERNCONF=HEIDI
make installkernel KERNCONF=HEIDI
mergemaster
make installworld
shutdown -r now

dd if=/dev/zero of=/dev/ad6 bs=512 count=32

zpool create tank ad6
zfs create tank/usr
zfs create tank/var
zfs create -V 4gb tank/swap
zfs set org.freebsd:swap=on tank/swap
zpool set bootfs=tank tank

rsync -avx / /tank
rsync -avx /usr/ /tank/usr
rsync -avx /var/ /tank/var
cd /usr/src
make installkernel KERNCONF=HEIDI DESTDIR=/tank

zpool export tank

dd if=/boot/zfsboot of=/dev/ad6 bs=512 count=1
dd if=/boot/zfsboot of=/dev/ad6 bs=512 skip=1 seek=1024
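(the first dd puts the 512-byte boot block into sector 0; the second one
dd's the rest of zfsboot to sector 1024 - see the magical sector 1024
discussion quoted below)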

zpool import tank

zfs set mountpoint=legacy tank
zfs set mountpoint=/usr tank/usr
zfs set mountpoint=/var tank/var

shutdown -r now ...

At the 'MBR prompt' I pressed F5 (the second disk, ad6)
.. as written above, the loader gets loaded (at this stage
I suppose it's the stuff dd'd after block 1024?),
but the kernel is not found.

/usr/src/sys/i386/conf/HEIDI:
(among other things...):
options KVA_PAGES=512

(/tank)/boot/loader.conf:
vm.kmem_size=1024M
vm.kmem_size_max=1024M
vfs.zfs.arc_max=128M
vfs.zfs.vdev.cache.size=8M
vfs.root.mountfrom=zfs:tank

(/tank)/etc/fstab:
# DeviceMountpoint  FStype  Options DumpPass#
tank/   zfs rw  0   0
/dev/acd0   /cdrom  cd9660  ro,noauto   0   0

/snip

any help is welcome... don't know where to go from here right now.

BTW: I can't stop thanking the team for the incredible
pace at which bugs are fixed these days!


Regards,

Lorenzo



On 26.05.2009, at 18:42, George Hartzell wrote:


Andriy Gapon writes:

on 26/05/2009 19:21 George Hartzell said the following:

Dmitry Morozovsky writes:

On Tue, 26 May 2009, Mickael MAILLOT wrote:

MM Hi,
MM
MM i prefere use zfsboot boot sector, an example is better than  
a long talk:

MM
MM $ zpool create tank mirror ad4 ad6
MM $ zpool export tank
MM $ dd if=/boot/zfsboot of=/dev/ad4 bs=512 count=1
MM $ dd if=/boot/zfsboot of=/dev/ad6 bs=512 count=1
MM $ dd if=/boot/zfsboot of=/dev/ad4 bs=512 skeep=1  seek=1024
MM $ dd if=/boot/zfsboot of=/dev/ad6 bs=512 skeep=1  seek=1024

s/skeep/skip/ ? ;-)


What is the reason for copying zfsboot one bit at a time, as opposed
to

 dd if=/boot/zfsboot of=/dev/ad4 bs=512 count=2


seek=1024 for the second part? and no 'count=1' for it? :-)

[Just guessing] Apparently the first block of zfsboot is some form of MBR and the
rest is zfs-specific code that goes to magical sector 1024.


Ok, I managed to read the argument to seek as one block, apparently
my coffee hasn't hit yet.

I'm still confused about the two parts of zfsboot and what's magical
about seeking to 1024.

g.



Re: ZFS booting without partitions (was: ZFS boot on zfs mirror)

2009-05-28 Thread Lorenzo Perone


On 28.05.2009, at 21:46, Mickael MAILLOT wrote:


hi,

did you erase gmirror meta ? (on the last sector)
with: gmirror clear ad6


Oops, I had forgotten that. Just did it (in single user mode),
but it didn't help :( Shall I repeat any of the other steps
after clearing the gmirror metadata?

thanx a lot for your help...

Lorenzo


2009/5/28 Lorenzo Perone lopez.on.the.li...@yellowspace.net:

Hi,

I tried hard... but without success ;(

the result is, when choosing the disk with the zfs boot
sectors in it (in my case F5, which goes to ad6), the kernel
is not found. the console shows:

forth not found
definitions not found
only not found
(the above repeated several times)

can't load 'kernel'

and I get thrown to the loader prompt.
lsdev does not show any ZFS devices.

Strange thing: if I boot from the other disk, F1, which is my
ad4 containing the normal ufs system I used to make up the other
one, and escape to the loader prompt, lsdev actually sees the
zpool which is on the other disk, and shows:
zfs0: tank

I tried booting with boot zfs:tank or zfs:tank:/boot/kernel/kernel,
but there I get the panic: free: guard1 fail message.
(would boot zfs:tank:/boot/kernel/kernel be correct, anyways?)

Sure I'm doing something wrong, but what...? Is it a problem that
the pool is made out of the second disk only (ad6)?

Here are my details (note: latest stable and biosdisk.c merged
with changes shown in r185095. no problems in buildworld/kernel):
()





Re: cvsup.de.FreeBSD.org out of sync? (was: make buildkernel KERNCONF=GENERIC fails)

2009-05-29 Thread Lorenzo Perone


On 29.05.2009, at 13:06, Peter Jeremy wrote:

On 2009-May-27 00:42:52 +0200, Lorenzo Perone lopez.on.the.li...@yellowspace.net 
 wrote:

On 27.05.2009, at 00:08, Per olof Ljungmark wrote:


Ruben van Staveren wrote:

On 26 May 2009, at 22:38, Christian Walther wrote:

it finished successfully. From my point of view it appears that
cvsup.de.freebsd.org is out of sync.

cvsup.de.freebsd.org is definitely out of sync. I had build errors
which only went away after switching to a different cvsup server


Same here, a week ago.


same here too... a few mins and hours ago.
with cvsup5.de.freebsd.org as well...


This is a not-uncommon problem with lots of CVSup servers.  edwin@
maintains a statistics page at http://www.mavetju.org/unix/freebsd-mirrors/
that is worth studying - it shows that cvsup.de.freebsd.org is badly
out of sync, though cvsup5 should be OK.


Cool work... I'm thinking of a small script that uses that page
for wrapping csup and passing it the best and nearest available
mirror (for that, of course, a plain text/XML output would be
great)... :)

In any case, at least a $FreeBSD$ tag check would be a nice addition to csup:
getting outdated sources can sometimes have quite destructive
consequences...

Lorenzo



Re: ZFS booting without partitions (was: ZFS boot on zfs mirror)

2009-06-01 Thread Lorenzo Perone

On 31.05.2009, at 09:18, Adam McDougall wrote:


I encountered the same symptoms today on both a 32bit and 64bit
brand new install using gptzfsboot.  It works for me when I use
a copy of loader from an 8-current box with zfs support compiled in.
I haven't looked into it much yet but it might help you.  If you
want, you can try the loader I am using from:
http://www.egr.msu.edu/~mcdouga9/loader



Hi, thanks a lot for this hint. Meanwhile, I had almost given up
and tried ZFS on root with GPT partitioning, using
gptzfsboot as the bootloader, a UFS partition as the boot
partition (gmirrored across both disks), and the rest (inclusive of a
zvol for swap!) on ZFS. This worked perfectly on the first try.
(If anyone is interested, I can post my commented command
series for that, but it's just a mix of the available tutorials on
the web; a rough outline follows below.)

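In outline it went roughly like this (partition sizes, labels and the
bootcode step are from memory, so treat them as assumptions rather than a
verified recipe; the gpart steps are repeated for the second disk):

gpart create -s gpt ad4
gpart add -b 34 -s 128 -t freebsd-boot ad4      # boot code
gpart add -s 1048576 -t freebsd-ufs ad4         # small UFS boot partition (512 MB), gmirrored
gpart add -t freebsd-zfs ad4                    # the rest goes to the pool
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad4
gmirror label -v bootfs ad4p2 ad6p2             # mirror the UFS boot partition
zpool create tank mirror ad4p3 ad6p3            # mirrored pool for everything else
zfs create -V 4gb tank/swap                     # swap on a zvol
zfs set org.freebsd:swap=on tank/swap
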
I'll be glad to give the ZFS-only solution a new try.
I had the same impression, that the loader was involved in the
problem, but had no environment at hand to build a -CURRENT right
away... (I did, in fact, repeat the dd steps with a zfsboot
bootloader from a recent 8-CURRENT snapshot ISO... with the results
being the same as before...).

Sidenote: I encountered a few panics when using rsync with the
HAX flags enabled (rsync -avxHAX from UFS to ZFS).
I'll try to figure out which one of the flags caused it...
(Hard links, ACLs, or eXtended attributes..).
Never had even the slightest problem with rsync -avx.

Thanx for posting me your loader,  I'll try with this tomorrow night!
(any hint, btw, on why the one in -STABLE seems to be
broken, or whether it has actually been fixed by now?)

Regards,
Lorenzo


(...)


2009/5/28 Lorenzo Perone:

Hi,

I tried hard... but without success ;(

the result is, when choosing the disk with the zfs boot
sectors in it (in my case F5, which goes to ad6), the kernel
is not found. the console shows:

forth not found
definitions not found
only not found
(the above repeated several times)

can't load 'kernel'

and I get thrown to the loader prompt.
lsdev does not show any ZFS devices.

(...)





Re: ZFS booting without partitions

2009-06-03 Thread Lorenzo Perone

OK, so I've got my next little adventure here to share :-)

... after reading your posts I was very eager to give the
whole boot-zfs-without-partitions thing a new try.

My starting situation was a ZFS mirror made up, as I wrote,
of two GPT partitions, so my pool looked like:

phaedrus# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        tank       ONLINE       0     0     0
          ad6p4    ONLINE       0     0     0
          ad4p4    ONLINE       0     0     0

it was root-mounted and everything was seemingly working
fine, with the machine surviving several bonnie++'s,
sysbenches, and supersmacks concurrently for many
hours (cool!).

So to give it another try my plan was to detach one
partition, clear the gmirror on the UFS boot partition,
make a new pool made out of the free disk and start
the experiment over.

it looked almost like this:

zpool offline tank ad4p4
zpool detach tank ad4p4

gmirror stop gmboot (made out of ad6p2 and ad4p2)
gmirror remove gmboot ad4p2

Then I had to reboot because it wouldn't give up
the swap partition on the zpool.

That's where the first problem began: it wouldn't boot
anymore... just because I removed a device?
In this case I was stuck at the mountroot: stage.
It wouldn't find the root filesystem on ZFS.
(This also happened when physically detaching ad4.)

So I booted off a recent 8-CURRENT ISO DVD, and although
the mountroot stage comes, IIRC, later than
the loader, I smelled it could have something to do
with it, so I downloaded Adam's CURRENT/ZFS loader and put it in
the appropriate place on my UFS boot partition...

note:
From the CD, I had to import the pool with
zpool import -o altroot=/somewhere tank to avoid having
problems with the datasets being mounted on top
of the 8-fixit environment's /usr ...

Ok, rebooted, and whoops it would boot again in the previous
environment.

So... from there I started over with the creation of
a ZFS-bootonly situation on ad4 (with the intention
of zpool-attaching ad6 later on)

dd if=/dev/zero bs=1m of=/dev/ad4 count=200
(just to be safe, some 'whitespace'..)

zpool create esso da4

zfs snapshot -r tank@night
zfs send -R tank@night | zfs recv -d -F esso
(it did what it had to do - cool new v13 feature BTW!)

zpool export esso

dd if=/boot/zfsboot of=/dev/ad4 bs=512 count=1
dd if=/boot/zfsboot of=/dev/ad4 bs=512 skip=1 seek=1024

zpool import esso

zpool set bootfs=esso esso

the mountpoints (legacy on the poolfs, esso,
and the corresponding ones) had been correctly
copied by the send -R.

Just shortly mounted esso somewhere else,
edited loader.conf and fstab, and put it back
to legacy.

shutdown -r now.

Upon boot, it would wait a while, not present
any F1/F5, and booted into the old environment
(ad6p2 boot partition and then mounted tank as root).

From there, a zfs list or zpool status just showed
the root pool (tank), but the new one (esso) was
not present.

A zpool import showed:

heidegger# zpool import
  pool: esso
    id: 865609520845688328
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        esso        UNAVAIL  insufficient replicas
          ad4       UNAVAIL  cannot open

zpool import -f esso did not succeed; instead,
looking at the console, I found
ZFS: WARNING: could not open ad4 for writing

I repeated the steps above two more times, making sure
I had wiped everything off ad4 before trying... but it
would always come up with that message. The disk is OK,
the cables too, I triple-checked it. Besides, writing
to the disk with other means (such as dd or creating a new
pool) succeeded... (albeit after the usual
sysctl kern.geom.debugflags=16 ...)

Well, for now I think I'll stick with the GPT + UFS boot +
ZFS root solution (I'm so happy this works seamlessly,
so this is a big THANKS and not a complaint!), but I
thought I'd share the latest hiccups...

I won't be getting to that machine for a few days before
restoring the GPT/UFS-based mirror, so if someone would like
me to provide other info in the meantime, I'll be happy to contribute it.

Big Regards!

Lorenzo


On 01.06.2009, at 19:09, Lorenzo Perone wrote:


On 31.05.2009, at 09:18, Adam McDougall wrote:


I encountered the same symptoms today on both a 32bit and 64bit
brand new install using gptzfsboot.  It works for me when I use
a copy of loader from an 8-current box with zfs support compiled in.
I haven't looked into it much yet but it might help you.  If you
want, you can try the loader I am using from:
http://www.egr.msu.edu/~mcdouga9/loader


Thanx for posting me your loader,  I'll try with this tomorrow night!
(any hint, btw, on why the one in -STABLE seems to be
broken, or whether it has actually been fixed by now?)




ZFS list -t snapshot USAGE column

2009-06-09 Thread Lorenzo Perone

Hi there,

Just wondering: since the ZFS v13 update (to be precise, FreeBSD
7.2-STABLE #11: Wed Jun  3 23:11:29 CEST 2009), why is the USAGE
column in a zfs list -t snapshot no longer showing the space used
by each snapshot? I made those snapshots with zfs snapshot -r.


They're almost all showing a USAGE of 0K, even though there have been
changes to the dataset since the snapshot was taken.


Regards,

Lorenzo




Re: 7.x and multiple IPs in jails

2008-10-28 Thread Lorenzo Perone



Hi, there's a patch by Bjoern A. Zeeb, available at
http://people.freebsd.org/~bz/bz_jail7-20080920-01-at150161.diff

which applies and works well with 7.1-PRERELEASE currently.
I had similar issues to solve and patched several hosts
with it, so far with success.

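For reference, this is roughly how I applied it on those hosts (the exact
patch invocation may need adjusting, and the kernel config name is just a
placeholder):

cd /usr/src
fetch http://people.freebsd.org/~bz/bz_jail7-20080920-01-at150161.diff
patch -C < bz_jail7-20080920-01-at150161.diff    # dry run first
patch < bz_jail7-20080920-01-at150161.diff
make buildworld && make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC && make installworld && mergemaster
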
Bjoern has done excellent work in patching all the
relevant parts, so you'll be able to use the stock
rc.d/jail script, you get an updated manpage, and
jls -v shows all the IPs while preserving
compatibility with scripts that make assumptions about
the usual jls output.

Please see the freebsd-jail mailing list archives of
the last weeks and months for more info.

I hope very much that these patches will be included
officially in RELENG_7 soon.

Regards,

Lorenzo




On 28.10.2008, at 07:32, Charles Sprickman wrote:


Hello all,

I've been searching around and have come up with no current  
discussions on this issue.  I'll keep it brief:


In 7.0 or 7.1 is there any provision to have multiple IP addresses  
in a jail?


I'm stumped on this, as I just started a new hosting project that  
needs a few jails.  At least one of those requires multiple IPs,  
which is something I never really even realized was not supported.   
What puzzles me more is that before I decided to host this stuff  
myself, I was shopping for FreeBSD VPS providers, and I noticed that  
Verio is actually offering what looks like jails as VPSs, and they  
are offering multiple IPs.  Is this something they hacked up and did  
not contribute back?


Is there any firewall hackery to be had that can at least let me do  
IP based virtual hosts for web hosting?


Thanks,

Charles


Re: ZFS

2008-10-30 Thread Lorenzo Perone

On 22.10.2008, at 17:38, Freddie Cash wrote:

Personally, we use it in production for a remote backup box using ZFS and
Rsync (64-bit FreeBSD 7-Stable from August, 2x dual-core Opteron 2200s, 8
GB DDR2 RAM, 24x 500 GB SATA disks attached to two 3Ware 9650/9550
controllers as single-disks).  Works beautifully, backing up 80 FreeBSD
and Debian Linux servers every night, creating snapshots with each run.
Restoring files from an arbitrary day is as simple as navigating to the
needed .zfs/snapshot/snapname/path/ and scping the file to wherever.
And full system restores are as simple as boot livecd, partition/format
disks, run rsync.



So your system doesn't suffer panics and/or deadlocks, or do you just
cope with them as collateral damage (which, admittedly, is less of
a problem with a logging fs)?

If that's the case, would you share the details of what you're running
on that machine (RELENG_7? 7.0? HEAD?) and which patches
and knobs you used? I have a similar setup on a host which
backs up far fewer machines and locks up every... 3-9 weeks or so.
That host only has about 2 GB of RAM though.

Thanx and regards,

Lorenzo


Re: ZFS

2008-10-31 Thread Lorenzo Perone


Thanx a lot for sharing :)

OK guys, that gives me some courage to dare the next experiment.
I've got one host running an AMP application and a mailserver,
which I'd like to set up with live ZFS goodness (I know that
customer would kiss me for having a snapshot history of the mail
accounts...), and your posts give me the spark to try this one out.
It's nothing high-volume, so summing up Freddie's and your posts
makes me hope this one won't deadlock too soon.

Anyway, it's a DELL PE 1950 with enough RAM and, most of all, a
DRAC, so I can powercycle it remotely if it does the dance :-)

Hope to have good news to report after a few weeks of
ZFS entertainment...

Thanx a lot and regards...

Lorenzo

On 31.10.2008, at 03:15, Louis Kowolowski wrote:


On Oct 30, 2008, at 2:55 PM, Lorenzo Perone wrote:

On 22.10.2008, at 17:38, Freddie Cash wrote:
Personally, we use it in production for a remote backup box using  
ZFS and
Rsync (64-bit FreeBSD 7-Stable from August, 2x dual-core Opteron  
2200s, 8

GB DDR2 RAM, 24x 500 GB SATA disks attached to two 3Ware 9650/9550
controllers as single-disks).  Works beautifully, backing up 80  
FreeBSD
and Debian Linux servers every night, creating snapshots with each  
run.
Restoring files from an arbitrary day is as simple as navigating  
to the
needed .zfs/snapshot/snapname/path/ and scping the file to  
wherever.
And full system restores are as simple as boot livecd, partition/ 
format

disks, run rsync.



So your system doesn't suffer panics and/or deadlocks, or you just
cope with them as collateral damage (which, admitted, is less of
a problem with a logging fs)?

If that's the case, would you share the details about what you're  
using

on that machine (RELENG_7?, 7_0? HEAD?) and which patches
/knobs You used? I have a similar setup on a host which
backs up way fewer machines and locks up every... 3-9 weeks or so.
That host only has about 2GB ram though.


I have a system which is sort of similar in production at work.
I have the following tunables (for ZFS) set:
zfs_load=YES
vm.kmem_size_max=1024M
vm.kmem_size=1024M
vfs.zfs.arc_min=16M
vfs.zfs.arc_max=384M

[EMAIL PROTECTED] lkowolowski 76 ]$ uname -a
FreeBSD release.pgp.com 7.1-PRERELEASE FreeBSD 7.1-PRERELEASE #0:  
Wed Sep  3 12:18:57 PDT 2008 [EMAIL PROTECTED]:/usr/obj/usr/ 
src/sys/GENERIC  amd64

[EMAIL PROTECTED] lkowolowski 77 ]$

This box has 2G of RAM, and 8.5T in ZFS spread across 8 RAID1  
mirrors in an EonStore Fiber array (direct attach).


It's been rock solid and stores all of our build collateral.

--
Louis Kowolowski[EMAIL PROTECTED]
Cryptomonkeys:  http://www.cryptomonkeys.com/~louisk

Making life more interesting for people since 1977



Re: ZFS

2008-10-31 Thread Lorenzo Perone

I've been using ZFS since 7.0-RELEASE. I'm currently running the latest stable.
OK, the load is not a production one, as the box is used as a home
server (NAS), but the hardware is limited too (only 512 MB of RAM,
a single-core A64 3200+, motherboard-integrated SATA controller).
I tried to stress the filesystem a bit with multiple simultaneous
rsyncs. No glitches. The only failures were when swap was on a zvol
instead of the system drive. Even with more RAM, that regularly ended
in panics or deadlocks (most of the time, deadlocks) under high
load.


Not sure of anything here, but you might want to try with non-zfs  
swap - on another drive(s) or dedicated slices ?


Yep, I think I'm going to use a separate slice
for the pool, mounting into the respective jails
only the needed filesystems:

mypool/mail into /jails/mail/maildataroot
mypool/db into /jails/web/mysql-bup-slave

(or sort of)

and then use frequent snapshots for mypool/mail
(even hourly or so), and for the database,
a few times per day mysql-backup-slave.sh stop,
zfs snapshot mypool/db, mysql-backup-slave.sh start..

Snapshotting the MySQL slave is a really nice trick
which I've used on SunOS with ZFS, and it really
rocks: I never shut down the master, the slave
goes down for only a few seconds, and the database
filesystem is consistent and synced.

In the current case, I think it is not only a feature
but also a must: _if_ the host deadlocks and
MySQL fails to sync, at least I have a working
snapshot of the data. I wouldn't put the master itself
on ZFS for now, but if all goes well for a while,
why not.

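The snapshot step for the database looks roughly like this
(mysql-backup-slave.sh is just my wrapper around the slave's rc script,
and the snapshot naming is only an example):

mysql-backup-slave.sh stop
zfs snapshot mypool/db@`date +%Y%m%d-%H%M`
mysql-backup-slave.sh start

That way the slave is stopped just long enough to take a consistent snapshot.
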
BTW: while sync does not work anymore in a deadlock situation,
I've seen that fsync mostly still does.
So something like find /var/db/mysql -type f -exec fsync {} \;
can save your files if the db is running on UFS..

Thanx  Regards!


Lorenzo




Re: ZFS crashes on heavy threaded environment

2008-11-18 Thread Lorenzo Perone


For what it's worth, I have similar problems on a comparable system
(amd64/8GB, 7.1-PRERELEASE #3: Sun Nov 16 13:39:43), which I wouldn't call
heavily threaded yet (there is only one mysql51 running,
plus courier-mta/imap, with at most 15 users right now).
Perhaps worth a note: Bjoern's multi-IP jail patches are applied on
this system.

The setup is such that one ZFS filesystem is mounted into a jail handling
only mail (and, of that filesystem, just the root of the mail files), and a
script on the main host rotates snapshots hourly (making a new one and
destroying the oldest).

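The rotation script is essentially just something along these lines
(dataset name, snapshot naming and the number of snapshots kept are
specific to my setup):

#!/bin/sh
# take a new hourly snapshot and keep only the newest $KEEP of them
FS=hkpool/mail     # the dataset holding the jail's mail files
KEEP=24            # number of hourly snapshots to keep
zfs snapshot ${FS}@hourly-`date +%Y%m%d%H`
# list this dataset's hourly snapshots, newest first, and destroy everything past the first $KEEP
zfs list -H -o name -t snapshot | grep "^${FS}@hourly-" | sort -r | \
    tail -n +$((KEEP + 1)) | while read snap; do
        zfs destroy "${snap}"
done
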
After about 8-24 hours of production:

- mysqld is stuck in sbwait state;
- messages start filling up with
  kernel: vm_thread_new: kstack allocation failed
- almost any attempt to fork a process fails with
  Cannot allocate memory.

No panic so far, at least since I've introduced  
vfs.zfs.prefetch_disable=1.

Before that, I experienced several panics upon shutdown.

If I still have an open shell, I can send around some -TERMs and
-KILLs and halfway get back control; after that, if I zfs umount -a
kernel memory usage drastically drops down, and I can resume
the services. However, not for long. After about 1-2 hrs of production
it starts whining again in the messages about kstack allocation failed,
and soon thereafter it all repeats. Only rebooting gives back another
12-24hrs of operation.

What I've tracked down so far:
- zfs destroy'ing old snapshots definitely makes those failures
pop up earlier
- I've been collecting some data around the time of the memory
problems, which I post below.

Since this is a production machine (I know, I shouldn't - but hey,
you made us get a taste for it and now we ended up wanting more! So,
yes, I confirm, you definitely _are_ evil! ;)), I'm almost
ready to move it back to UFS.

But if it can be useful for debugging, I would be willing to set up a  
zabbix
agent or such to track whichever values could be useful over time for  
a day or two.

If on the other hand these bugs (leaks, or whatever) are likely to
be solved in the recent commit, I'll just move back to UFS until
they're ported to -STABLE.

Here follows some data about memory usage (strangely, I never
saw this even halfway reaching 1.5 GB, but it's really almost
voodoo to me so I leave the analysis up to others):

# kldstat sizes are hex; bc wants uppercase digits and ibase=16
TEXT=`kldstat | tr a-f A-F | awk 'BEGIN {print "ibase=16"}; NR > 1 {print $4}' | bc | awk '{a+=$1}; END {print a}'`

DATA=`vmstat -m | sed 's/K//' | awk '{a+=$3}; END {print a*1024}'`
TOTAL=`echo $DATA $TEXT | awk '{print $1+$2}'`

TEXT=13102280, 12.4953 MB
DATA=470022144, 448.248 MB
TOTAL=483124424, 460.743 MB

sysctl -a | grep vnodes
kern.maxvnodes: 10
kern.minvnodes: 25000
vfs.freevnodes: 2380
vfs.wantfreevnodes: 25000
vfs.numvnodes: 43982

As said, the box has 8 GB of RAM, the following loader.conf,
and at the time of the lockups there were about 5GB free
userland memory available.

my loader.conf:
vm.kmem_size=1536M
vm.kmem_size_max=1536M
vfs.zfs.arc_min=512M
vfs.zfs.arc_max=768M
vfs.zfs.prefetch_disable=1

as for the filesystem, I only changed the recordsize and
the mountpoint, the rest is default:

[horkheimer:lopez] root# zfs get all hkpool/mail
NAME PROPERTY   VALUE  SOURCE
hkpool/mail  type   filesystem -
hkpool/mail  creation   Fri Oct 31 13:28 2008  -
hkpool/mail  used   5.50G  -
hkpool/mail  available  386G   -
hkpool/mail  referenced 4.33G  -
hkpool/mail  compressratio  1.05x  -
hkpool/mail  mountedyes-
hkpool/mail  quota  none   default
hkpool/mail  reservationnone   default
hkpool/mail  recordsize 4K local
hkpool/mail  mountpoint /jails/mail/mail   local
hkpool/mail  sharenfs   offdefault
hkpool/mail  checksum   on default
hkpool/mail  compressionon local
hkpool/mail  atime  on default
hkpool/mail  deviceson default
hkpool/mail  exec   on default
hkpool/mail  setuid on default
hkpool/mail  readonly   offdefault
hkpool/mail  jailed offlocal
hkpool/mail  snapdirhidden default
hkpool/mail  aclmodegroupmask  default
hkpool/mail  aclinherit secure default
hkpool/mail  canmount   on default
hkpool/mail  shareiscsi offdefault
hkpool/mail  xattr  offtemporary
hkpool/mail  copies 1  default

the pool is using a partition on a hardware RAID1:

[horkheimer:lopez] root# zpool status
  pool: hkpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
hkpool  ONLINE

Re: MFC ZFS: when?

2008-11-28 Thread Lorenzo Perone


On 22.11.2008, at 00:58, Zaphod Beeblebrox wrote:

In several of the recent ZFS posts, multiple people have asked when  
this

will be MFC'd to 7.x.  This query has been studiously ignored as other
chatter about whatever ZFS issue is discussed.

So in a post with no other bug report or discussion content to  
distract us,

when is it intended that ZFS be MFC'd to 7.x?


While I would have seconded a request for update info a month ago, I think
it is no longer appropriate. Work is actively ongoing (if you follow -current),
and now it's time to take out that old (or new) box and help
debug all possible scenarios on CURRENT before crying after
the next "kmem_map too small" or other panic. If I understand correctly,
the issues arising from large de/allocations of memory in
kernel space are a tricky business which needs careful and
thorough testing, tuning and thinking...
AFAIK, even Solaris hasn't ironed out all the potential problems,
e.g. if you read this article and the linked bug database entries...:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache

So let's really rather help (if possible) with
a -current install, or at least not take time with
tedious requests :)

Sincere regards to PJD and the whole development
core team, as FreeBSD is really keeping up with the
fast tech hype - but with style.


Lorenzo




Panic on 8-STABLE in mpt(4) on a DELL PowerEdge R300

2010-02-25 Thread Lorenzo Perone


Hello,

I just got a PowerEdge R300, which netbooted fine with 7.2-STABLE 
(FreeBSD 7.2-STABLE #6: Wed Dec  2 01:25:52 CET 2009), but is dropping 
me into a panic right at boot with 8.0-STABLE (just built).


This is what I'm getting (please forgive the laziness of not
transcribing everything..):


http://lorenzo.yellowspace.net/R300_mpt_panic.gif

An excerpt:

Fatal trap 12: page fault while in kernel mode

current process = 8 (mpt_raid0)
Stopped at xpt_rescan+0x14:movq(%rsi),%rdx

After just rebuilding the kernel with debug symbols, DDB and KDB, and 
booting with boot -d, I'm dropped into kdb but I cannot do anything 
there, at least not via the vKVM of the DRAC (I would have liked to 
trace, but it won't work..)


The controller is the SAS 6i/R.

If I can provide any other details please let me know.

Thanx a lot for taking your time,

Regards,

Lorenzo




Re: Panic on 8-STABLE in mpt(4) on a DELL PowerEdge R300

2010-02-25 Thread Lorenzo Perone

On 25.02.10 22:11, Lorenzo Perone wrote:


I just got a PowerEdge R300, which netbooted fine with 7.2-STABLE
(FreeBSD 7.2-STABLE #6: Wed Dec 2 01:25:52 CET 2009), but is dropping me
into a panic right at boot with 8.0-STABLE (just built).

This is what I'm getting (please forgive the lazyness of not
transcribing everything..):

http://lorenzo.yellowspace.net/R300_mpt_panic.gif

An excerpt:

Fatal trap 12: page fault while in kernel mode

current process = 8 (mpt_raid0)
Stopped at xpt_rescan+0x14: movq(%rsi),%rdx

After just rebuilding the kernel with debug symbols, DDB and KDB, and
booting with boot -d, I'm dropped into kdb but I cannot do anything
there, at least not via the vKVM of the DRAC (I would have liked to
trace, but it won't work..)

The controller is the SAS 6i/R.

If I can provide any other details please let me know.

Thanx a lot for taking your time,



A follow up - I recompiled the kernel with mpt from head (svn checkout 
svn://svn.freebsd.org/base/head/sys/dev/mpt) and the problem still exists.


Let me know if I should post a pr.
I'm also available for testing patches.

Regards,

Lorenzo



Re: Panic on 8-STABLE in mpt(4) on a DELL PowerEdge R300

2010-02-26 Thread Lorenzo Perone

COOL! THANKS a LOT Alexander!

Can't believe it. You post a panic at 11pm and get a patch at 1pm next 
day...? You must be crazy! ;)


Works for me. I patched against 8/stable. I'll be testing the machine a 
bit more. But for now, no panics!


I guess the patch should be committed soon, also because the panic happens
quite early in the boot; without a DRAC/iLO you're locked out pretty
fast, and mpt is used on many DELL/HP setups.


while true ; do echo Thank You ; done

Lorenzo

On 26.02.10 13:25, Alexander Motin wrote:

John J. Rushford wrote:

I'm running into the same problem, mpt(4) panic on FreeBSD 8-STABLE.

I'm running FreeBSD 8.0-STABLE, the current kernel was cvsup'd and built
@ January 14th, 2010.  I cvsup'd tonight, 2/25/2010, and built a new
kernel.  Attached is the panic when I tried to boot into single user
mode, I was able to boot up on the old kernel built on January 14th.

Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address= 0x10
fault code= supervisor read data, page not present
instruction pointer= 0x20:0x8019c4bd
stack pointer= 0x28:0xff80e81d5ba0
frame pointer= 0x28:0xff80e81d5bd0
code segment= base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags= interrupt enabled, resume, IOPL = 0
current process= 6 (mpt_raid0)
trap number= 12
panic: page fault


Attached patch should fix the problem.






Panic on 8-STABLE in pfctl with options VIMAGE on a DELL PowerEdge R300 (bge)

2010-02-26 Thread Lorenzo Perone


Hello,

Just encountered a panic when starting pf (/etc/rc.d/pf start) on an
8.0-STABLE system.


uname -a

FreeBSD benjamin 8.0-STABLE #0: Fri Feb 26 18:33:44 UTC 2010
r...@benjamin:/usr/obj/usr/src/sys/BYTESATWORK_R8_INTEL_DEBUG  amd64


the system is a Dell PowerEdge R300 with bge interfaces, 16 GB RAM 
(dmesg attached).


Panic and trace remote console screenshots:

http://lorenzo.yellowspace.net/R300_pfctl_panic.gif
http://lorenzo.yellowspace.net/R300_pfctl_panic_trace.gif

Excerpt transcript:

panic:

Fatal trap 12: page fault while in kernel mode
current process = 1302
Stopped at pfil_head_get+0x41 movq 0x28(%rcx),%rdx

trace:

pfil_head_get() at pfil_head_get+0x41
pfioctl() at pfioctl+0x3351
devfs_ioctl_f() at devfs_ioctl_f+0x71
kern_ioctl() at kern_ioctl+0xe4
ioctl() at ioctl+0xed
syscall() at syscall+0x1e7
Xfast_syscall() at Xfast_syscall+0xe1

I was just planning to experiment with VIMAGE, and it is not
required for production (I'm aware of the warning about it being
experimental...), but I thought it might be useful to report this. Please send
me a note if I should file a PR.


The panic does not occur with the same kernel compiled without options
VIMAGE.


Note that the dmesg is from the system booted with the kernel without 
VIMAGE, that's why it doesn't contain the warning.


big Regards to all the team,

Lorenzo

Copyright (c) 1992-2010 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 8.0-STABLE #0: Fri Feb 26 18:33:44 UTC 2010
r...@benjamin:/usr/obj/usr/src/sys/BYTESATWORK_R8_INTEL_NO_VIMAGE_DEBUG 
amd64
Timecounter i8254 frequency 1193182 Hz quality 0
CPU: Intel(R) Xeon(R) CPU   X3363  @ 2.83GHz (2833.34-MHz K8-class CPU)
  Origin = GenuineIntel  Id = 0x1067a  Stepping = 10
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x40ce3bd<SSE3,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,DCA,SSE4.1,XSAVE>
  AMD Features=0x20100800<SYSCALL,NX,LM>
  AMD Features2=0x1<LAHF>
  TSC: P-state invariant
real memory  = 17179869184 (16384 MB)
avail memory = 16542048256 (15775 MB)
ACPI APIC Table: DELL   PE_SC3  
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
FreeBSD/SMP: 1 package(s) x 4 core(s)
 cpu0 (BSP): APIC ID:  0
 cpu1 (AP): APIC ID:  1
 cpu2 (AP): APIC ID:  2
 cpu3 (AP): APIC ID:  3
ioapic0: Changing APIC ID to 4
ioapic0 Version 2.0 irqs 0-23 on motherboard
kbd1 at kbdmux0
cryptosoft0: software crypto on motherboard
acpi0: DELL PE_SC3 on motherboard
acpi0: [ITHREAD]
acpi0: Power Button (fixed)
Timecounter ACPI-fast frequency 3579545 Hz quality 1000
acpi_timer0: 24-bit timer at 3.579545MHz port 0x808-0x80b on acpi0
acpi_hpet0: High Precision Event Timer iomem 0xfed0-0xfed003ff on acpi0
Timecounter HPET frequency 14318180 Hz quality 900
pcib0: ACPI Host-PCI bridge port 0xcf8-0xcff on acpi0
pci0: ACPI PCI bus on pcib0
pcib1: PCI-PCI bridge at device 2.0 on pci0
pci3: PCI bus on pcib1
pcib2: PCI-PCI bridge at device 3.0 on pci0
pci4: PCI bus on pcib2
pcib3: ACPI PCI-PCI bridge at device 4.0 on pci0
pci5: ACPI PCI bus on pcib3
mpt0: LSILogic SAS/SATA Adapter port 0xec00-0xecff mem 
0xdfcec000-0xdfce,0xdfcf-0xdfcf irq 16 at device 0.0 on pci5
mpt0: [ITHREAD]
mpt0: MPI Version=1.5.18.0
mpt0: Capabilities: ( RAID-0 RAID-1E RAID-1 )
mpt0: 1 Active Volume (2 Max)
mpt0: 2 Hidden Drive Members (14 Max)
pcib4: PCI-PCI bridge at device 5.0 on pci0
pci6: PCI bus on pcib4
pcib5: ACPI PCI-PCI bridge at device 6.0 on pci0
pci7: ACPI PCI bus on pcib5
pcib6: ACPI PCI-PCI bridge at device 7.0 on pci0
pci8: ACPI PCI bus on pcib6
pcib7: PCI-PCI bridge irq 16 at device 28.0 on pci0
pci9: PCI bus on pcib7
pcib8: ACPI PCI-PCI bridge irq 16 at device 28.4 on pci0
pci1: ACPI PCI bus on pcib8
bge0: Broadcom NetXtreme Gigabit Ethernet Controller, ASIC rev. 0x00a200 mem 
0xdfdf-0xdfdf irq 16 at device 0.0 on pci1
miibus0: MII bus on bge0
brgphy0: BCM5722 10/100/1000baseTX PHY PHY 1 on miibus0
brgphy0:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 
1000baseT-FDX, auto
bge0: Ethernet address: 00:26:b9:50:03:3e
bge0: [FILTER]
pcib9: ACPI PCI-PCI bridge irq 17 at device 28.5 on pci0
pci2: ACPI PCI bus on pcib9
bge1: Broadcom NetXtreme Gigabit Ethernet Controller, ASIC rev. 0x00a200 mem 
0xdfef-0xdfef irq 17 at device 0.0 on pci2
miibus1: MII bus on bge1
brgphy1: BCM5722 10/100/1000baseTX PHY PHY 1 on miibus1
brgphy1:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 
1000baseT-FDX, auto
bge1: Ethernet address: 00:26:b9:50:03:3f
bge1: [FILTER]
uhci0: Intel 82801I (ICH9) USB controller port 0xcc80-0xcc9f irq 21 at device 
29.0 on pci0
uhci0: [ITHREAD]
usbus0: Intel 82801I (ICH9) USB controller on uhci0
uhci1: Intel 82801I (ICH9) USB controller port 0xcca0-0xccbf irq 

Re: FreeBSD and DELL Perc H200

2011-04-21 Thread Lorenzo Perone



Would have loved to use a stable branch (i.e. 8.2) instead of development
(9.0-CURRENT), though. Especially for the new DELL servers with the Perc H200,
providing a snapshot with the changes would be greatly appreciated!


Hi Holger,

Go for 8-STABLE. Works fine for me - no need for CURRENT for mps(4).
Following some heavy load tests (such as concurrent, subsequent
buildworlds while bonnie++ing around), I can also state that it is very
stable. I have it on a DELL PowerEdge R410 with the PERC H200A adapter and
SAS disks, and it works like a charm. I used gmirror on that, and the
performance is awesome, provided that you tune sysctl.conf and add
vfs.read_max=128, which makes sustained reads much faster (along with the
-b load balance strategy when labeling the mirror).

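Concretely, something like this (the device names are just placeholders
for whatever disks the H200 exposes on your box):

echo 'vfs.read_max=128' >> /etc/sysctl.conf
sysctl vfs.read_max=128
gmirror label -v -b load gm0 da0 da1
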
I don't think there are any other snapshot services still running... so 
(assuming
mps(4) actually works with H200) you need to make a build of stable/8 yourself -
the simplest approach is probably to PXE boot the servers and install by hand...
but that's of course not a trivial thing if you never tried it before.
   
Well, setting up a PXE boot server just for installing one server... :-/


   
It's easy if you have any other FreeBSD machine of the same architecture
around running 8-STABLE - in fact I also did such a thing once using
VirtualBox running on a Mac some time ago. If you need a quick setup
guide, tell me and I'll send you a few commands.


You can also take one disk out, attach it to a running FreeBSD machine,
gpart it, then
cd /usr/src && make installworld DESTDIR=/mountpoint && make
installkernel DESTDIR=/mountpoint && make distribution DESTDIR=/mountpoint
Edit the few usual suspects, such as at least /mountpoint/etc/fstab, and
boot the system with the disk..

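The "gpart it" part would go roughly like this (device name, sizes and the
UFS layout are assumptions, adjust to taste):

gpart create -s gpt da0
gpart add -b 34 -s 128 -t freebsd-boot da0
gpart add -s 8388608 -t freebsd-swap da0        # 4 GB swap, for example
gpart add -t freebsd-ufs da0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0
newfs -U /dev/da0p3
mount /dev/da0p3 /mountpoint
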



Regards,

Lorenzo