Re: gvinum gjournal

2009-01-15 Thread Ulf Lilleengen
On Wed, Jan 14, 2009 at 04:23:30PM -0500, Brian McCann wrote:
 Hi all.  I'm cross-posting this since I figure I'll have better luck
 finding someone who's done this before...
 
 I'm building a system that has 4 1.5TB Seagate SATA drives in it.
 I've setup gvinum and made mirrors for my OS partitions, and a raid5
 plex for a big data partition.  I'm trying to get gjournal to run on
 the raid5 volume...but it's doing stuff that isn't expected.  First,
 here's my gvinum config for the array:
 
 ---snip---
 drive e0 device /dev/ad8s1g
 drive e1 device /dev/ad10s1g
 drive e2 device /dev/ad12s1g
 drive e3 device /dev/ad14s1g
 volume array1
   plex org raid5 128k
 sd drive e0
 sd drive e1
 sd drive e2
 sd drive e3
 ---/snip---
 
 Now...according to the handbook, the volume it creates is essentially
 a disk drive.  So...I run the following gjournal commands to make the
 journal, and here's what I get:
 
 ---snip---
 # gjournal label /dev/gvinum/array1
 GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains data.
 GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains journal.
 GEOM_JOURNAL: Journal gvinum/plex/array1.p0 clean.
 GEOM_JOURNAL: BIO_FLUSH not supported by gvinum/plex/array1.p0.
 # gjournal list
 Geom name: gjournal 4267655417
 ID: 4267655417
 Providers:
 1. Name: gvinum/plex/array1.p0.journal
Mediasize: 4477282549248 (4.1T)
Sectorsize: 512
Mode: r0w0e0
 Consumers:
 1. Name: gvinum/plex/array1.p0
Mediasize: 4478356291584 (4.1T)
Sectorsize: 512
Mode: r1w1e1
Jend: 4478356291072
Jstart: 4477282549248
Role: Data,Journal
 --/snip---
 
 So...why is it even touching the plex p0?  I figured it would behave just
 like on a disk: give it da0 and it creates da0.journal.  Moving on, if I
 try to newfs the journal, which is now
 gvinum/plex/array1.p0.journal, I get:
 
Hi,

I think that it touches it because the .p0 contains the gjournal metadata in
the same way that the volume does, so gjournal attaches to that before the
volume. One problem is that gjournal attaches to the wrong provider, but
it's also silly that the plex provider is exposed in the first place. A fix for
this is in a newer version of gvinum (as the plex is not exposed) if you're
willing to try it.
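
If you want to double-check where the metadata actually ended up, gjournal's
dump command prints the on-disk metadata (if any) for a provider, so something
like this should show it on both the plex and the volume:

---snip---
# gjournal dump /dev/gvinum/plex/array1.p0
# gjournal dump /dev/gvinum/array1
---/snip---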

-- 
Ulf Lilleengen


Re: gvinum gjournal

2009-01-15 Thread Brian McCann
On Thu, Jan 15, 2009 at 4:33 AM, Ulf Lilleengen
ulf.lilleen...@gmail.com wrote:
 Hi,

 I think that it touches it because the .p0 contains the gjournal metadata in
 the same way that the volume does, so gjournal attaches to that before the
 volume. One problem is that gjournal attaches to the wrong provider, but
 it's also silly that the provider is exposed in the first place. A fix for
 this is in a newer version of gvinum (as the plex is not exposed) if you're
 willing to try.

 --
 Ulf Lilleengen


At this point, I'm willing to try anything, but preferably something
that's stable since this will be done to at least 15 identical devices
and sent out to various places, so I won't have physical access to the
machines if something were to go wrong.  I looked into graid3 and
booting off of a DOM/Flash card, but since I have 4 drives, that won't
work since graid3 requires 2N+1 drives.  The stuff I've found on
graid5 seems to say that it's all still really experimental and has
some bugs. :(

That said...if you've got it, I'll try it. :)
Thanks,
--Brian

-- 
_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_
Brian McCann

I don't have to take this abuse from you -- I've got hundreds of
people waiting to abuse me.
-- Bill Murray, Ghostbusters


Re: gvinum gjournal

2009-01-15 Thread Ulf Lilleengen
On Thu, Jan 15, 2009 at 07:14:25AM -0500, Brian McCann wrote:
 On Thu, Jan 15, 2009 at 4:33 AM, Ulf Lilleengen
 ulf.lilleen...@gmail.com wrote:
  Hi,
 
  I think that it touches it because the .p0 contains the gjournal metadata in
  the same way that the volume does, so gjournal attaches to that before the
  volume. One problem is that gjournal attaches to the wrong provider, but
  it's also silly that the provider is exposed in the first place. A fix for
  this is in a newer version of gvinum (as the plex is not exposed) if you're
  willing to try.
 
  --
  Ulf Lilleengen
 
 
 At this point, I'm willing to try anything, but preferably something
 that's stable since this will be done to at least 15 identical devices
 and sent out to various places, so I won't have physical access to the
 machines if something were to go wrong.  I looked into graid3 and
 booting off of a DOM/Flash card, but since I have 4 drives, that won't
 work since graid3 requires 2N+1 drives.  The stuff I've found on
 graid5 seems to say that it's all still really experimental and has
 some bugs. :(
 
Well, if you have the choice between graid5 and gvinum, I might go with
graid5. It's not included in head (yet), but from what I've read on the
mailing list it seems to work (it's also included in FreeNAS). I can't really
say much, though, as I haven't tried it much. The gvinum version in progress
(quite stable, although with a few nits for advanced usage) can be found here,
if you want to try it:

http://svn.freebsd.org/base/projects/gvinum 

Should work on both HEAD and RELENG_7.
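
Roughly, trying it out means checking out that branch and rebuilding the
geom_vinum module and the gvinum(8) tool from it (the checkout path below is
just an example, and this assumes the branch carries the normal src layout):

---snip---
# svn checkout http://svn.freebsd.org/base/projects/gvinum /usr/gvinum-src
# cd /usr/gvinum-src/sys/modules/geom/geom_vinum && make && make install
# cd /usr/gvinum-src/sbin/gvinum && make && make install
---/snip---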
-- 
Ulf Lilleengen


Re: gvinum gjournal

2009-01-15 Thread Ulf Lilleengen
On Thu, Jan 15, 2009 at 02:22:13PM +0100, Ivan Voras wrote:
 Ulf Lilleengen wrote:
  On Wed, Jan 14, 2009 at 04:23:30PM -0500, Brian McCann wrote:
  [...]
 
  Hi,
  
  I think that it touches it because the .p0 contains the gjournal metadata in
  the same way that the volume does, so gjournal attaches to that before the
  volume. One problem is that gjournal attaches to the wrong provider, but
  it's also silly that the provider is exposed in the first place. A fix for
  this is in a newer version of gvinum (as the plex is not exposed) if you're
  willing to try.
  
 
 A simpler fix is to use the -h ("hardcode provider names") switch to
 the gjournal label command (see the man page).
 
Oh, nice feature. I recommend this then :)
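
For the archives, once the old label is cleared off the plex (gjournal stop
plus gjournal clear, as Brian already tried), a sketch with his volume name
would look something like this:

---snip---
# gjournal label -h /dev/gvinum/array1
# newfs -J /dev/gvinum/array1.journal
# mount -o async /dev/gvinum/array1.journal /mnt
---/snip---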

-- 
Ulf Lilleengen


gvinum gjournal

2009-01-14 Thread Brian McCann
Hi all.  I'm cross-posting this since I figure I'll have better luck
finding someone who's done this before...

I'm building a system that has 4 1.5TB Seagate SATA drives in it.
I've setup gvinum and made mirrors for my OS partitions, and a raid5
plex for a big data partition.  I'm trying to get gjournal to run on
the raid5 volume...but it's doing stuff that isn't expected.  First,
here's my gvinum config for the array:

---snip---
drive e0 device /dev/ad8s1g
drive e1 device /dev/ad10s1g
drive e2 device /dev/ad12s1g
drive e3 device /dev/ad14s1g
volume array1
  plex org raid5 128k
sd drive e0
sd drive e1
sd drive e2
sd drive e3
---/snip---
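
(In case it matters, that config is just a text file that I feed to gvinum;
the file name below is only an example:)

---snip---
# gvinum create /root/array1.conf
# gvinum list
---/snip---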

Now...according to the handbook, the volume it creates is essentially
a disk drive.  So...I run the following gjournal commands to make the
journal, and here's what I get:

---snip---
# gjournal label /dev/gvinum/array1
GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains data.
GEOM_JOURNAL: Journal 4267655417: gvinum/plex/array1.p0 contains journal.
GEOM_JOURNAL: Journal gvinum/plex/array1.p0 clean.
GEOM_JOURNAL: BIO_FLUSH not supported by gvinum/plex/array1.p0.
# gjournal list
Geom name: gjournal 4267655417
ID: 4267655417
Providers:
1. Name: gvinum/plex/array1.p0.journal
   Mediasize: 4477282549248 (4.1T)
   Sectorsize: 512
   Mode: r0w0e0
Consumers:
1. Name: gvinum/plex/array1.p0
   Mediasize: 4478356291584 (4.1T)
   Sectorsize: 512
   Mode: r1w1e1
   Jend: 4478356291072
   Jstart: 4477282549248
   Role: Data,Journal
--/snip---

So...why is it even touching the plex p0?  I figured it would behave just
like on a disk: give it da0 and it creates da0.journal.  Moving on, if I
try to newfs the journal, which is now
gvinum/plex/array1.p0.journal, I get:

---snip---
# newfs -J /dev/gvinum/plex/array1.p0.journal
/dev/gvinum/plex/array1.p0.journal: 4269869.4MB (8744692476 sectors) block size
16384, fragment size 2048
using 23236 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
newfs: can't read old UFS1 superblock: end of file from block device:
No such file or directory
---/snip---

Followed by a panic and reboot:

---snip---
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x0
fault code  = supervisor read, page not present
instruction pointer = 0x20:0xc0d8d440
stack pointer   = 0x28:0xd4e25c44
frame pointer   = 0x28:0xd4e25cf4
code segment= base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, def32 1, gran 1
processor eflags= interrupt enabled, resume, IOPL = 0
current process = 47 (gv_p array1.p0)
trap number = 12
panic: page fault
cpuid = 0
Uptime: 14m38s
Cannot dump. No dump device defined.
Automatic reboot in 15 seconds - press a key on the console to abort
---/snip---
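
(I realize I should probably point dumpdev at swap so the next panic leaves a
crash dump to look at; something like the following, assuming swap lives on
ad8s1b:)

---snip---
# dumpon /dev/ad8s1b
# echo 'dumpdev="/dev/ad8s1b"' >> /etc/rc.conf
---/snip---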

Next...I destroyed/cleared/stopped/etc. the journal to start fresh and made
a new one...it created the same thing
(gvinum/plex/array1.p0.journal)...I then rebooted, loaded the gjournal
module, and now I see gvinum/array1.journal as the provider, and the
provider inside the plex is gone. I then run my newfs (newfs -J
/dev/gvinum/array1.journal), and I get:

---snip---
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x1c
fault code  = supervisor read, page not present
instruction pointer = 0x20:0xc0d8eec5
stack pointer   = 0x28:0xd4e2ecbc
frame pointer   = 0x28:0xd4e2ecf4
code segment= base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, def32 1, gran 1
processor eflags= interrupt enabled, resume, IOPL = 0
current process = 50 (gv_v array1)
trap number = 12
panic: page fault
cpuid = 0
Uptime: 8m18s
Cannot dump. No dump device defined.
Automatic reboot in 15 seconds - press a key on the console to abort

---/snip---

Does anyone have any ideas here?  I assumed gjournal would play nice
with any file system.  But clearly not.  After I clear the journal off
of /dev/gvinum/array1, I can do a newfs on it (/dev/gvinum/array1)
without the journal fine...so that confirms the RAID5 itself is OK.  Anyone
have any ideas?

Thanks!
--Brian

-- 
_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_
Brian McCann

I don't have to take this abuse from you -- I've got hundreds of
people waiting to abuse me.
-- Bill Murray, Ghostbusters


Re: gvinum gjournal

2009-01-14 Thread Brian McCann
On Wed, Jan 14, 2009 at 4:23 PM, Brian McCann bjmcc...@gmail.com wrote:

 Does anyone have any ideas here?  I assumed gjournal would play nice
 with any file system.  But clearly not.  After I clear the journal off
 of /dev/gvinum/array1, I can do a newfs on it (/dev/gvinum/array1)
 without the journal fine...so that confirms the RAID5 itself is OK.  Anyone
 have any ideas?

 Thanks!
 --Brian


I also just got the idea to try turning off write caching on the ATA
controller...no help.  Just thought I'd throw that out there in case it
clues anyone in on something.
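
(For reference, the knob for that is the hw.ata.wc loader tunable; roughly:)

---snip---
# echo 'hw.ata.wc=0' >> /boot/loader.conf
# reboot
---/snip---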

--Brian


-- 
_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_-=-_
Brian McCann

I don't have to take this abuse from you -- I've got hundreds of
people waiting to abuse me.
-- Bill Murray, Ghostbusters