http://mail.opensolaris.org/pipermail/opensolaris-help/2009-November/015824.html
--
James Andrewartha
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
/lib/gam_server | grep port_create
[458] | 134589544| 0|FUNC |GLOB |0|UNDEF |port_create
The patch for port_create has never gone upstream, however; gvfs instead uses
glib's gio, which has backends for inotify, solaris, fam and win32.
--
James Andrewartha
No pricing on the webpage though - does anyone know how it compares
in price to a Logzilla?
--
James Andrewartha
Marvell MV88SX and works very well in
Solaris (package SUNWmv88sx).
They're PCI-X SATA cards; the AOC-SASLP-MV8 is a PCIe SAS card and has no
(Open)Solaris driver.
--
James Andrewartha
James Lever wrote:
> On 07/07/2009, at 8:20 PM, James Andrewartha wrote:
>> Have you tried putting the slog on this controller, either as an SSD or
>> regular disk? It's supported by the mega_sas driver, x86 and amd64 only.
> What exactly are you suggesting here? Configure one disk on this array
> it is now.
Are you aware of posix_fadvise(2) and madvise(2)?
--
James Andrewartha
James Lever wrote:
> We also have a PERC 6/E w/512MB BBWC to test with or fall back to if we
> go with a Linux solution.
Have you tried putting the slog on this controller, either as an SSD or
regular disk? It's supported by the mega_sas driver, x86 and amd64 only.
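For what it's worth, attaching a dedicated slog device is a one-liner; a minimal sketch (the pool name "tank" and the device names are placeholders for illustration):

```shell
# Add a dedicated log (slog) device to an existing pool.
# "tank" and the cXtYdZ device names are placeholders.
zpool add tank log c2t0d0

# Or mirror the slog so a single device failure can't lose it:
zpool add tank log mirror c2t0d0 c2t1d0

# Verify the log vdev shows up:
zpool status tank
```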
--
James Andrewartha | Sysadmin
it's not supported.
http://www.hardforum.com/showthread.php?t=1397855 has a fair few people
testing it out, but mostly under Windows.
--
James Andrewartha
http://lwn.net/Articles/323464/
http://thunk.org/tytso/blog/2009/03/15/dont-fear-the-fsync/
http://lwn.net/Articles/323752/ *
http://lwn.net/Articles/322823/ *
The links marked * are currently subscriber-only; email me for a free link if
you'd like to read them.
[2] http://lwn.net/Articles/283745/
--
James Andrewartha | Sysadmin
/048550.html
--
James Andrewartha
write per disc spin)
This has been on my TODO list for ages... :(
Does the perl script at http://brad.livejournal.com/2116715.html do what you
want?
--
James Andrewartha
the recordsize on the first filesystem
before performing the zfs send. This will then be used for all files when
you receive the filesystem. I haven't tested this with recordsize, but I did
with compression, and I imagine recordsize (and other properties) will behave
the same way.
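A sketch of the sequence described above, with placeholder pool and filesystem names; note the post's caveat that this was tested with compression but not recordsize, and on some builds you may need zfs send -p to carry properties across explicitly:

```shell
# Set the property on the source filesystem before sending.
# "tank/data" and "backup/data" are hypothetical names.
zfs set recordsize=8K tank/data

# Snapshot and send; per the post, the property set on the source
# should then apply to files written on the received side.
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs recv backup/data

# Check what the received filesystem ended up with:
zfs get recordsize backup/data
```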
--
James Andrewartha
on the sending side are destroyed.
--
James Andrewartha
backup
RAM, Areca (who formerly specialised in SATA controllers) now do SAS RAID at
reasonable prices, and have Solaris drivers.
--
James Andrewartha
/ is promising some SSD benchmarks soon.
James Andrewartha
James Andrewartha wrote:
> On Thu, 2008-01-17 at 09:29 -0800, Richard Elling wrote:
>> You don't say which version of ZFS you are running, but what you
>> want is the -R option for zfs send. See also the example of send
>> usage in the zfs(1m) man page.
> Sorry, I'm running SXCE nv75. I can't see any
Dave Lowenstein wrote:
> Couldn't we move fixing "panic the system if it can't find a LUN" up to
> the front of the line? That one really sucks.
That's controlled by the failmode property of the zpool, added in PSARC
2007/567, which was integrated in b77.
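On builds with that PSARC case integrated, the property is set per pool; a minimal sketch ("tank" is a placeholder pool name):

```shell
# failmode controls pool behaviour on catastrophic I/O failure:
#   wait     - block I/O until the device returns (the default)
#   continue - return EIO to new writes instead of hanging
#   panic    - panic the system (the old behaviour)
zpool set failmode=continue tank

# Confirm the current setting:
zpool get failmode tank
```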
--
James Andrewartha
cannot receive: destination does not exist
What am I missing here? I can't recv to space, because it exists, but I
can't make it not exist since it's the root filesystem of the pool. Do I
have to send each filesystem individually and rsync up the root fs?
Thanks,
James Andrewartha
page. Ah, it's PSARC/2007/574 and nv77. I'm not convinced it'll
solve my problem (sending the root filesystem of a pool), but I'll
upgrade and give it a shot.
Thanks,
James Andrewartha
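For reference, a sketch of the recursive-replication flow from PSARC/2007/574 discussed above, which avoids receiving over an existing root filesystem ("tank" and "backup" are placeholder pool names):

```shell
# Snapshot the whole pool recursively, then send the full tree.
# zfs recv -d strips the source pool name so the streams land
# under the destination pool; -F rolls back/overwrites as needed.
zfs snapshot -r tank@move
zfs send -R tank@move | zfs recv -F -d backup
```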