Re: Xorg hangs with drmwtq in 7.2-RELEASE

2009-05-31 Thread David Johnson
I haven't heard anything on this in three weeks. I filed a bug report, but no 
acceptance yet. Does this imply that there is no intention to fix this 
problem? What is happening with this? Am I even posting to the right list? I'm 
completely in the dark here.

-- 
David Johnson
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS booting without partitions (was: ZFS boot on zfs mirror)

2009-05-31 Thread Adam McDougall
I encountered the same symptoms today on both a 32-bit and a 64-bit
brand-new install using gptzfsboot.  It works for me when I use
a copy of the loader from an 8-CURRENT box with ZFS support compiled in.
I haven't looked into it much yet, but it might help you.  If you
want, you can try the loader I am using from:
http://www.egr.msu.edu/~mcdouga9/loader

On Thu, May 28, 2009 at 10:41:42PM +0200, Lorenzo Perone wrote:

  
  On 28.05.2009, at 21:46, Mickael MAILLOT wrote:
  
   hi,
  
   did you erase gmirror meta ? (on the last sector)
   with: gmirror clear ad6
  
  Oops, I had forgotten that. Just did it (in single user mode),
  but it didn't help :( Shall I repeat any of the other steps
  after clearing the gmirror metadata?
  
  Thanks a lot for your help...
  
  Lorenzo
  
   2009/5/28 Lorenzo Perone lopez.on.the.li...@yellowspace.net:
   Hi,
  
   I tried hard... but without success ;(
  
   the result is, when choosing the disk with the zfs boot
   sectors in it (in my case F5, which goes to ad6), the kernel
   is not found. the console shows:
  
   forth not found
   definitions not found
   only not found
   (the above repeated several times)
  
   can't load 'kernel'
  
   and I get thrown to the loader prompt.
   lsdev does not show any ZFS devices.
  
   Strange thing: if I boot from the other disk, F1, which is my
   ad4 containing the normal ufs system I used to make up the other
   one, and escape to the loader prompt, lsdev actually sees the
   zpool which is on the other disk, and shows:
   zfs0: tank
  
   I tried booting with boot zfs:tank or zfs:tank:/boot/kernel/kernel,
   but there I get the panic: free: guard1 fail message.
   (would boot zfs:tank:/boot/kernel/kernel be correct, anyways?)
  
   Sure I'm doing something wrong, but what...? Is it a problem that
   the pool is made out of the second disk only (ad6)?
  
   Here are my details (note: latest stable and biosdisk.c merged
   with changes shown in r185095. no problems in buildworld/kernel):
   ()
  
  
  


Re: ZFS NAS configuration question

2009-05-31 Thread Aristedes Maniatis


On 31/05/2009, at 4:41 AM, Dan Naumov wrote:


To top that
off, even when/if you do it right, your entire disk doesn't go to ZFS
anyway, because you still need swap and a /boot that are non-ZFS, so
you will have to install ZFS onto a slice rather than the entire disk,
and even Sun discourages doing that.


ZFS on root is still pretty new to FreeBSD, and until it gets ironed  
out and all the sysinstall tools support it nicely, it isn't hard to  
use a small UFS slice to get things going during boot. And there is  
nothing wrong with putting ZFS onto a slice rather than the entire  
disk: that is a very common approach.


http://www.ish.com.au/solutions/articles/freebsdzfs

Ari Maniatis



--
ish
http://www.ish.com.au
Level 1, 30 Wilson Street Newtown 2042 Australia
phone +61 2 9550 5001   fax +61 2 9550 4001
GPG fingerprint CBFB 84B4 738D 4E87 5E5C  5EFA EF6A 7D2E 3E49 102A




Re: ZFS MFC heads down

2009-05-31 Thread Henri Hennebert

Kip Macy wrote:

Please try applying this change to your tree and let me know.


I applied the patch and rebooted twice without problems. I'll keep you
posted if I encounter a new crash.

Thanks

Henri


Thanks,
Kip

http://svn.freebsd.org/viewvc/base?view=revision&revision=193110


On Sat, May 30, 2009 at 2:11 AM, Henri Hennebert h...@restart.be wrote:

Kip Macy wrote:

On Wed, May 20, 2009 at 2:59 PM, Kip Macy km...@freebsd.org wrote:

I will be MFC'ing the newer ZFS support some time this afternoon. Both
world and kernel will need to be re-built. Existing pools will
continue to work without upgrade.


If you choose to upgrade a pool to take advantage of new features you
will no longer be able to use it with sources prior to today. 'zfs
send/recv' is not expected to inter-operate between different pool
versions.


The MFC went in r192498. Please let me know if you have any problems.


I get a Fatal trap 12: page fault while in kernel mode
at shutdown. The core.txt is at http://verbier.restart.be/xfer/core.txt.61

Thanks for your work

Henri



Thanks,
Kip


Re: ZFS booting without partitions (was: ZFS boot on zfs mirror)

2009-05-31 Thread Enrico M.
On Thursday 28 May 2009 15:58:04 Lorenzo Perone wrote:
 Hi,

 I tried hard... but without success ;(

 the result is, when choosing the disk with the zfs boot
 sectors in it (in my case F5, which goes to ad6), the kernel
 is not found. the console shows:

 forth not found
 definitions not found
 only not found
 (the above repeated several times)

 can't load 'kernel'

 and I get thrown to the loader prompt.
 lsdev does not show any ZFS devices.

 Strange thing: if I boot from the other disk, F1, which is my
 ad4 containing the normal ufs system I used to make up the other
 one, and escape to the loader prompt, lsdev actually sees the
 zpool which is on the other disk, and shows:
 zfs0: tank

 I tried booting with boot zfs:tank or zfs:tank:/boot/kernel/kernel,
 but there I get the panic: free: guard1 fail message.
 (would boot zfs:tank:/boot/kernel/kernel be correct, anyways?)

 Sure I'm doing something wrong, but what...? Is it a problem that
 the pool is made out of the second disk only (ad6)?

 Here are my details (note: latest stable and biosdisk.c merged
 with changes shown in r185095. no problems in buildworld/kernel):

 snip

 Machine: p4 4GHz 4 GB RAM (i386)

 Note: the pool has actually a different name (heidi
 instead of tank, if this can be of any relevance...),
 just using tank here as it's one of the conventions...

 mount (just to show my starting situation)

 /dev/mirror/gm0s1a on / (ufs, local)
 devfs on /dev (devfs, local)
 /dev/mirror/gm0s1e on /tmp (ufs, local, soft-updates)
 /dev/mirror/gm0s1f on /usr (ufs, local, soft-updates)
 /dev/mirror/gm0s1d on /var (ufs, local, soft-updates)

 gmirror status
       Name    Status  Components
 mirror/gm0  DEGRADED  ad4
 (ad6 used to be the second disk...)

 echo 'LOADER_ZFS_SUPPORT=yes' >> /etc/make.conf

 cd /usr/src
 make buildworld && make buildkernel KERNCONF=HEIDI
 make installkernel KERNCONF=HEIDI
 mergemaster
 make installworld
 shutdown -r now

 dd if=/dev/zero of=/dev/ad6 bs=512 count=32

 zpool create tank ad6
 zfs create tank/usr
 zfs create tank/var
 zfs create -V 4gb tank/swap
 zfs set org.freebsd:swap=on tank/swap
 zpool set bootfs=tank tank

 rsync -avx / /tank
 rsync -avx /usr/ /tank/usr
 rsync -avx /var/ /tank/var
 cd /usr/src
 make installkernel KERNCONF=HEIDI DESTDIR=/tank

 zpool export tank

 dd if=/boot/zfsboot of=/dev/ad6 bs=512 count=1
 dd if=/boot/zfsboot of=/dev/ad6 bs=512 skip=1 seek=1024

 zpool import tank

 zfs set mountpoint=legacy tank
 zfs set mountpoint=/usr tank/usr
 zfs set mountpoint=/var tank/var

 shutdown -r now ...

 at the 'mbr prompt' I pressed F5 (the second disk, ad6)
 .. as written above, loader gets loaded (at this stage
 I suppose it's the stuff dd'd after block 1024?),
 but the kernel is not found.

 /usr/src/sys/i386/conf/HEIDI:
 (among other things...):
 options KVA_PAGES=512

 (/tank)/boot/loader.conf:
 vm.kmem_size="1024M"
 vm.kmem_size_max="1024M"
 vfs.zfs.arc_max="128M"
 vfs.zfs.vdev.cache.size="8M"
 vfs.root.mountfrom="zfs:tank"

 (/tank)/etc/fstab:
 # Device  Mountpoint  FStype  Options DumpPass#
 tank  /   zfs rw  0   0
 /dev/acd0 /cdrom  cd9660  ro,noauto   0   0

 /snap

 any help is welcome... don't know where to go from here right now.

 BTW: I can't stop thanking the team for the incredible
 pace at which bugs are fixed these days!


 Regards,

 Lorenzo

 On 26.05.2009, at 18:42, George Hartzell wrote:
  Andriy Gapon writes:
  on 26/05/2009 19:21 George Hartzell said the following:
  Dmitry Morozovsky writes:
  On Tue, 26 May 2009, Mickael MAILLOT wrote:
 
  MM Hi,
  MM
  MM i prefere use zfsboot boot sector, an example is better than
  a long talk:
  MM
  MM $ zpool create tank mirror ad4 ad6
  MM $ zpool export tank
  MM $ dd if=/boot/zfsboot of=/dev/ad4 bs=512 count=1
  MM $ dd if=/boot/zfsboot of=/dev/ad6 bs=512 count=1
  MM $ dd if=/boot/zfsboot of=/dev/ad4 bs=512 skeep=1  seek=1024
  MM $ dd if=/boot/zfsboot of=/dev/ad6 bs=512 skeep=1  seek=1024
 
  s/skeep/skip/ ? ;-)
 
  What is the reason for copying zfsboot one bit at a time, as opposed
  to
 
   dd if=/boot/zfsboot of=/dev/ad4 bs=512 count=2
 
  seek=1024 for the second part? and no 'count=1' for it? :-)
 
  [Just guessing] Apparently the first block of zfsboot is some form
  of MBR and the
  rest is zfs-specific code that goes to magical sector 1024.
 
  Ok, I managed to read the argument to seek as one block, apparently
  my coffee hasn't hit yet.
 
  I'm still confused about the two parts of zfsboot and what's magical
  about seeking to 1024.
 
  g.

I obtained the same result with FreeBSD 7-STABLE.
I first installed a new system on a SCSI disk with UFS.
I put LOADER_ZFS_SUPPORT=yes in /etc/make.conf, updated src and did a
buildworld without trouble.
I built a new kernel, installed it, and installed the new world.
Up to here, everything was fine.

Then I tried with a PATA hard drive, ad2.
I reset the MBR and the partition table with dd if=/dev/zero 
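The two-step zfsboot write discussed upthread (first sector to sector 0, the rest to sector 1024) can be rehearsed on plain files before touching a real disk. A minimal sketch; all file names and sizes here are illustrative, not from the thread:

```shell
# Rehearse the zfsboot layout on scratch files instead of /dev/ad4 or ad6.
# Sector 0 gets the first 512 bytes of the boot code; the remainder is
# written starting at sector 1024 (skip=1 on input, seek=1024 on output),
# which is the corrected form of the commands quoted upthread.
set -e

# Stand-in for /boot/zfsboot: 64 sectors of 512 bytes (size is made up).
dd if=/dev/zero of=zfsboot.demo bs=512 count=64 2>/dev/null
# Stand-in for the target disk: 2048 sectors.
dd if=/dev/zero of=disk.img.demo bs=512 count=2048 2>/dev/null

# Step 1: first sector of the boot code to sector 0 of the "disk".
dd if=zfsboot.demo of=disk.img.demo bs=512 count=1 conv=notrunc 2>/dev/null
# Step 2: everything after sector 0 to sector 1024 onward.
dd if=zfsboot.demo of=disk.img.demo bs=512 skip=1 seek=1024 conv=notrunc 2>/dev/null

# conv=notrunc keeps the image at its full 2048 sectors (2048 * 512 bytes).
echo $(wc -c < disk.img.demo)
```

On a real install the targets would be the raw disk devices, where conv=notrunc is unnecessary.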

Problem with graid3 and gjournal

2009-05-31 Thread Rafael Henrique Faria
I'm using a GJournal on top of GRAID3 on top of GPart.

What happens:

If I go to my gjournal mount point and run:

dd if=/dev/zero of=file bs=1k count=10

The system freezes. No more response in the current ssh session, the
system doesn't respond to new ssh connections, and the console is
frozen too.

So, I tried with:

dd if=/dev/random of=file bs=1k count=10

The same thing.

I went to another machine and created the file with:

dd if=/dev/zero bs=1k count=10 | bzip2 | dd of=file.bz2

The file is created OK.

I sent the file to my machine using gjournal, and ran:

bzcat file.bz2 | dd of=file

The system froze again.
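For comparison, the same round trip completes instantly on a healthy filesystem. A minimal sketch using ordinary files in the current directory (the demo file names are illustrative):

```shell
# Recreate the reporter's pipeline on plain files: 10 KB of zeroes,
# compressed with bzip2, then decompressed back through dd.
set -e

# Equivalent of: dd if=/dev/zero bs=1k count=10 | bzip2 | dd of=file.bz2
dd if=/dev/zero bs=1k count=10 2>/dev/null | bzip2 > demo.bz2

# Equivalent of: bzcat file.bz2 | dd of=file
bzcat demo.bz2 | dd of=demo.out 2>/dev/null

# On a working volume the original 10240 bytes come back intact.
echo $(wc -c < demo.out)
```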

On the same server, I have another partition with gmirror, and
gjournal on top of it.

The file was created OK. Then I tried to copy the file from the
gmirror+gjournal partition to the graid3+gjournal partition, and the
system froze again.

So I believe that there is a problem with graid3+gjournal.

If I create files in the graid3+gjournal partition with vi, or send any
other file (e.g. /etc/rc.conf) to it, everything is OK. No freeze.
It only happens when I create a file with dd.

Where is the problem: with GRAID3, or with GJournal?

I'll try to remove the GJournal from this partition. But does anyone
know something about it?

My configuration:

FreeBSD papillon.cenadigital.com.br 7.2-STABLE FreeBSD 7.2-STABLE #0:
Wed May  6 21:36:59 UTC 2009
paramo...@papillon.cenadigital.com.br:/usr/obj/usr/src/sys/PAPILLON
i386

CPU: Intel Pentium III (598.63-MHz 686-class CPU)
  Origin = GenuineIntel  Id = 0x683  Stepping = 3
real memory  = 201326592 (192 MB)
avail memory = 182697984 (174 MB)

paramo...@papillon paramount # gpart show
=>      34  78242909  ad0  GPT  (37G)
        34       128     1  freebsd-boot  (64K)
       162   2097152     2  freebsd-ufs  (1.0G)
   2097314    786432     3  freebsd-swap  (384M)
   2883746   2097152     4  freebsd-ufs  (1.0G)
   4980898   6291456     5  freebsd-ufs  (3.0G)
  11272354  27105264     6  freebsd-ufs  (13G)
  38377618  39865325     7  freebsd-ufs  (19G)

=>      34  39865325  ad1  GPT  (19G)
        34  39865325     1  freebsd-ufs  (19G)

=>      34  80293181  ad2  GPT  (38G)
        34       128     1  freebsd-boot  (64K)
       162   2097152     2  freebsd-ufs  (1.0G)
   2097314    786432     3  freebsd-swap  (384M)
   2883746   2097152     4  freebsd-ufs  (1.0G)
   4980898   6291456     5  freebsd-ufs  (3.0G)
  11272354  27105264     6  freebsd-ufs  (13G)
  38377618  39865325     7  freebsd-ufs  (19G)
  78242943   2050272     8  freebsd-swap  (1.0G)


paramo...@papillon paramount # graid3 status
          Name    Status  Components
 raid3/gr0inet  COMPLETE  ad0p7
                          ad1p1
                          ad2p7

paramo...@papillon paramount # gmirror status
           Name    Status  Components
 mirror/gm0root  COMPLETE  ad0p2
                           ad2p2
  mirror/gm0tmp  COMPLETE  ad0p4
                           ad2p4
  mirror/gm0var  COMPLETE  ad0p5
                           ad2p5
  mirror/gm0usr  COMPLETE  ad0p6
                           ad2p6

kernel: GEOM_JOURNAL: Journal raid3/gr0inet clean.
kernel: GEOM_JOURNAL: BIO_FLUSH not supported by raid3/gr0inet.
GEOM_JOURNAL: Flush cache of raid3/gr0inet: error=19.
GEOM_JOURNAL: Flush cache of raid3/gr0inet: error=19.
GEOM_JOURNAL: Flush cache of raid3/gr0inet: error=19.
GEOM_JOURNAL: Flush cache of raid3/gr0inet: error=19.
GEOM_JOURNAL: Flush cache of raid3/gr0inet: error=19.

I think GJournal can't sit on top of GRAID3, but GJournal on top
of GMirror is working very well.

Thanks for any help.

-- 
Rafael Henrique da Silva Faria
# Grupo Cena Digital
# (16) 9229-8928
# www.cenadigital.com.br


Re: Xorg hangs with drmwtq in 7.2-RELEASE

2009-05-31 Thread Robert Noland
On Sun, 2009-05-31 at 00:12 -0700, David Johnson wrote:
 I haven't heard anything on this in three weeks. I filed a bug report, but no 
 acceptance yet. Does this imply that there is no intention to fix this 
 problem? What is happening with this? Am I even posting to the right list? 
 I'm 
 completely in the dark here.

Yes, you're in the right place...  I still can't reproduce it, so
it's problematic to track down.  I reviewed the commit that you pointed
out, but that is the r6/7xx import commit and involves a lot of code.

robert.

-- 
Robert Noland rnol...@freebsd.org
FreeBSD




Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Ronald Klop
On Fri, 29 May 2009 13:34:57 +0200, Dan Naumov dan.nau...@gmail.com  
wrote:



Now that I have evaluated the numbers and my needs a bit, I am really
confused about what appropriate course of action for me would be.

1) Use ZFS without GELI and hope that zfs-crypto gets implemented in
Solaris and ported to FreeBSD soon, and that when it does, it won't
come with such a dramatic performance decrease as GELI/ZFS seems to
result in.
2) Go ahead with the original plan of using GELI/ZFS and grind my
teeth at the 24 MB/s read speed off a single disk.


3) Add extra disks. It will speed up reading. One extra disk will
roughly double the read speed.


Ronald.


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Dan Naumov
I am pretty sure that adding more disks wouldn't solve anything in
this case; only a faster CPU or a faster crypto system would.
When you are capable of 70 MB/s reads on a single unencrypted disk, but
only 24 MB/s reads off the same disk while encrypted, your disk speed
isn't the problem.

- Dan Naumov



On Sun, May 31, 2009 at 5:29 PM, Ronald Klop
ronald-freeb...@klop.yi.org wrote:
 On Fri, 29 May 2009 13:34:57 +0200, Dan Naumov dan.nau...@gmail.com wrote:

 Now that I have evaluated the numbers and my needs a bit, I am really
 confused about what appropriate course of action for me would be.

 1) Use ZFS without GELI and hope that zfs-crypto gets implemented in
 Solaris and ported to FreeBSD soon, and that when it does, it won't
 come with such a dramatic performance decrease as GELI/ZFS seems to
 result in.
 2) Go ahead with the original plan of using GELI/ZFS and grind my
 teeth at the 24 MB/s read speed off a single disk.

 3) Add extra disks. It will speed up reading. One extra disk will
 roughly double the read speed.


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Ulrich Spörlein
On Fri, 29.05.2009 at 12:47:38 +0200, Morgan Wesström wrote:
 You can benchmark the encryption subsystem only, like this:
 
 # kldload geom_zero
 # geli onetime -s 4096 -l 256 gzero
 # sysctl kern.geom.zero.clear=0
 # dd if=/dev/gzero.eli of=/dev/null bs=1M count=512
 
 512+0 records in
 512+0 records out
 536870912 bytes transferred in 11.861871 secs (45260222 bytes/sec)
 
 The benchmark will use 256-bit AES and the numbers are from my Core2 Duo
 Celeron E1200 1.6 GHz. My old trusty Pentium III 933MHz performs at
 13MB/s on that test. Both machines are recompiled with CPUTYPE=core2 and
 CPUTYPE=pentium3 respectively but unfortunately I have no benchmarks on
 how they perform without the CPU optimizations.
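The dd line quoted above already reports bytes per second; converting it to MiB/s is a one-liner (the figures are taken from the quoted run, not new measurements):

```shell
# 536870912 bytes in 11.861871 s, as reported by the quoted dd run,
# expressed in MiB/s (1 MiB = 1048576 bytes).
awk 'BEGIN { printf "%.1f MiB/s\n", 536870912 / 11.861871 / 1048576 }'
# prints 43.2 MiB/s
```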

Hi Morgan,

thanks for the nice benchmarking trick. I tried this on two ~7.2
systems:

CPU: Intel Pentium III (996.77-MHz 686-class CPU)
- 14.3MB/s

CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz (2793.01-MHz 686-class CPU)
- 47.5MB/s

Reading a big file from the pool of this P4 results in a 27.6MB/s net
transfer rate (single 7200 rpm SATA disk).

I would be *very* interested in numbers from the dual core Atom, both
with 2 CPUs and with 1 active core only. I think that having dual core
is a must for this setup, so you can use 2 GELI threads and have the ZFS
threads on top of that to spread the load.

Cheers,
Ulrich Spörlein
-- 
http://www.dubistterrorist.de/


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Morgan Wesström
 Hi Morgan,
 
 thanks for the nice benchmarking trick. I tried this on two ~7.2
 systems:
 
 CPU: Intel Pentium III (996.77-MHz 686-class CPU)
 - 14.3MB/s
 
 CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz (2793.01-MHz 686-class CPU)
 - 47.5MB/s
 
 Reading a big file from the pool of this P4 results in a 27.6MB/s net
 transfer rate (single 7200 rpm SATA disk).
 
 I would be *very* interested in numbers from the dual core Atom, both
 with 2 CPUs and with 1 active core only. I think that having dual core
 is a must for this setup, so you can use 2 GELI threads and have the ZFS
 threads on top of that to spread the load.
 
 Cheers,
 Ulrich Spörlein

Credit to pjd@ actually. Picked up the trick myself from freebsd-geom
some time ago :-)
http://lists.freebsd.org/pipermail/freebsd-geom/2007-July/002498.html

My Eee PC with a single-core N270 is being repaired at the moment; it
suffered a bad BIOS flash, so I can't help you with benchmarks until
it's back. I don't have access to another Atom CPU, unfortunately.

/Morgan


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Ulrich Spörlein
On Fri, 29.05.2009 at 11:19:44 +0300, Dan Naumov wrote:
 Also, feel free to criticize my planned filesystem layout for the
 first disk of this system; the idea behind /mnt/sysbackup is to take a
 snapshot of the FreeBSD installation and its settings before doing
 potentially hazardous things like upgrading to a new -RELEASE:
 
 ad1s1 (freebsd system slice)
   ad1s1a =  128bit Blowfish ad1s1a.eli 4GB swap
   ad1s1b 128GB ufs2+s /
   ad1s1c 128GB ufs2+s noauto /mnt/sysbackup
 
 ad1s2 =  128bit Blowfish ad1s2.eli
   zpool
   /home
   /mnt/data1

Hi Dan,

everybody has different needs, but what exactly are you doing with 128GB
of / ? What I did is the following:

2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)

Filesystem  1024-blocks     Used   Avail Capacity  Mounted on
/dev/ad0a        507630   139740  327280    30%    /
/dev/ad0d       1453102  1292296   44558    97%    /usr
/dev/md0         253678       16  233368     0%    /tmp

/usr is quite crowded, but I just need to clean up some ports again.
/var, /usr/src, /home, /usr/obj, /usr/ports are all on the GELI+ZFS
pool. If /usr turns out to be too small, I can also move /usr/local
there. That way booting and single user involves trusty old UFS only.

I also do regular dumps from the UFS filesystems to the ZFS tank, but
there's really no sacred data under / or /usr that I would miss if the
system crashed (all configuration changes are tracked using mercurial).

Anyway, my point is to use the full disks for GELI+ZFS whenever
possible. This makes it easier to replace faulty disks or grow ZFS
pools. The FreeBSD base system I would put somewhere else.

Cheers,
Ulrich Spörlein
-- 
http://www.dubistterrorist.de/


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Dan Naumov
Hi

Since you are suggesting 2 x 8GB USB for a root partition, what is
your experience with read/write speed and lifetime expectation of
modern USB sticks under FreeBSD and why 2 of them, GEOM mirror?

- Dan Naumov



 Hi Dan,

 everybody has different needs, but what exactly are you doing with 128GB
 of / ? What I did is the following:

 2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
 CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)

 Filesystem             1024-blocks      Used    Avail Capacity  Mounted on
 /dev/ad0a                   507630    139740   327280    30%    /
 /dev/ad0d                  1453102   1292296    44558    97%    /usr
 /dev/md0                    253678        16   233368     0%    /tmp

 /usr is quite crowded, but I just need to clean up some ports again.
 /var, /usr/src, /home, /usr/obj, /usr/ports are all on the GELI+ZFS
 pool. If /usr turns out to be too small, I can also move /usr/local
 there. That way booting and single user involves trusty old UFS only.

 I also do regular dumps from the UFS filesystems to the ZFS tank, but
 there's really no sacred data under / or /usr that I would miss if the
 system crashed (all configuration changes are tracked using mercurial).

 Anyway, my point is to use the full disks for GELI+ZFS whenever
 possible. This makes it easier to replace faulty disks or grow ZFS
 pools. The FreeBSD base system I would put somewhere else.


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Pertti Kosunen

Ulrich Spörlein wrote:

2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)


Many have an internal USB header.

http://www.logicsupply.com/products/afap_082usb


Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Freddie Cash
On Sun, May 31, 2009 at 9:05 AM, Ulrich Spörlein u...@spoerlein.net wrote:
 everybody has different needs, but what exactly are you doing with 128GB
 of / ? What I did is the following:

 2GB CF card + CF to ATA adapter (today, I would use 2x8GB USB sticks,
 CF2ATA adapters suck, but then again, which Mobo has internal USB ports?)

You can get CF-to-SATA adapters.  We've used CF-to-IDE quite
successfully in a pair of storage servers.  We have a couple of the
SATA adapters on order to test, as our new motherboards only have
one IDE controller, and mirroring across the master/slave of the same
channel sucks.

 /usr is quite crowded, but I just need to clean up some ports again.
 /var, /usr/src, /home, /usr/obj, /usr/ports are all on the GELI+ZFS
 pool. If /usr turns out to be to small, I can also move /usr/local
 there. That way booting and single user involves trusty old UFS only.

That's what we do as well, but with /usr/local on ZFS, leaving just /
and /usr on UFS.

-- 
Freddie Cash
fjwc...@gmail.com


buildworld fails with WITHOUT_CDDL=yes in src.conf

2009-05-31 Thread Pavel Greenberg
Hello everybody!
After today's source update I have a problem when doing make buildworld:

cc -O2 -fno-strict-aliasing -pipe -march=pentium4 -DLOADER_NFS_SUPPORT -
DBOOT_FORTH -I/usr/src/sys/boot/i386/loader/../../ficl -I/usr/src/sys/boot/
i386/loader/../../ficl/i386 -DLOADER_GZIP_SUPPORT -DLOADER_GPT_SUPPORT -I/usr/
src/sys/boot/i386/loader/../../common -I. -Wall -I/usr/src/sys/boot/i386/
loader/.. -I/usr/src/sys/boot/i386/loader/../btx/lib -ffreestanding -
mpreferred-stack-boundary=2  -mno-mmx -mno-3dnow -mno-sse -mno-sse2 -mno-
sse3  -c /usr/src/sys/boot/i386/loader/../../common/interp_forth.c
make: don't know how to make /usr/obj/usr/src/tmp/usr/lib/libzfs.a. Stop
*** Error code 2

Stop in /usr/src/sys/boot/i386.
*** Error code 1

Stop in /usr/src/sys/boot.
*** Error code 1

Stop in /usr/src/sys.
*** Error code 1

Stop in /usr/src.
*** Error code 1

Stop in /usr/src.
*** Error code 1

Stop in /usr/src.

In my src.conf I have the options
WITHOUT_CDDL=   true
WITHOUT_ZFS=true
because I don't use ZFS; my desktop doesn't have enough resources for it
and I don't want to build it. When I updated my OS some weeks ago with
the same src.conf, the process ended OK.


Re: buildworld fails with WITHOUT_CDDL=yes in src.conf

2009-05-31 Thread Daniel O'Connor
On Mon, 1 Jun 2009, Pavel Greenberg wrote:
 Hello everybody!
 After today's source update I have a problem when doing make
 buildworld:

 cc -O2 -fno-strict-aliasing -pipe -march=pentium4
 -DLOADER_NFS_SUPPORT - DBOOT_FORTH
 -I/usr/src/sys/boot/i386/loader/../../ficl -I/usr/src/sys/boot/
 i386/loader/../../ficl/i386 -DLOADER_GZIP_SUPPORT
 -DLOADER_GPT_SUPPORT -I/usr/ src/sys/boot/i386/loader/../../common
 -I. -Wall -I/usr/src/sys/boot/i386/ loader/..
 -I/usr/src/sys/boot/i386/loader/../btx/lib -ffreestanding -
 mpreferred-stack-boundary=2  -mno-mmx -mno-3dnow -mno-sse -mno-sse2
 -mno- sse3  -c
 /usr/src/sys/boot/i386/loader/../../common/interp_forth.c make: don't
 know how to make /usr/obj/usr/src/tmp/usr/lib/libzfs.a. Stop ***
 Error code 2

 Stop in /usr/src/sys/boot/i386.
 *** Error code 1

 Stop in /usr/src/sys/boot.
 *** Error code 1

 Stop in /usr/src/sys.
 *** Error code 1

 Stop in /usr/src.
 *** Error code 1

 Stop in /usr/src.
 *** Error code 1

 Stop in /usr/src.

 In my src.conf I have the options
 WITHOUT_CDDL=   true
 WITHOUT_ZFS=true
 because I don't use ZFS; my desktop doesn't have enough resources for
 it and I don't want to build it. When I updated my OS some weeks ago
 with the same src.conf, the process ended OK.

While the above IS a bug, it should be pointed out that unless you 
actually load the ZFS kld it won't use any memory on your system.

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
The nice thing about standards is that there
are so many of them to choose from.
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: ZFS on top of GELI / Intel Atom 330 system

2009-05-31 Thread Ulrich Spörlein
On Sun, 31.05.2009 at 19:28:51 +0300, Dan Naumov wrote:
 Hi
 
 Since you are suggesting 2 x 8GB USB for a root partition, what is
 your experience with read/write speed and lifetime expectation of
 modern USB sticks under FreeBSD and why 2 of them, GEOM mirror?

Well, my current setup is using an old 2GB CF card, so read/write speeds
suck (14 and 7 MB/s, respectively, IIRC), but then again, there are not
many actual read/writes on / or /usr for my setup anyway.

The 2x 8GB USB sticks I would of course use to gmirror the setup,
although I have been told that this is rather excessive. Modern flash
media should cope with enough write cycles to get you through a decade.
With /var being on GELI+ZFS this point is even more moot, IMHO.

A recent 8GB Sandisk U3 stick of mine manages to read/write ~25MB/s
(working from memory here), so this is pretty much the maximum USB 2.0
is giving you.

Cheers,
Ulrich Spörlein
-- 
http://www.dubistterrorist.de/