On 07/02/12 15:00, Nico Williams wrote:
On Mon, Jul 2, 2012 at 3:32 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 2 Jul 2012, Iwan Aucamp wrote:
I'm interested in some more detail on how the ZFS intent log behaves for
updates done via a memory-mapped file - i.e. will the
Agreed - msync/munmap is the only guarantee.
On 07/ 3/12 08:47 AM, Nico Williams wrote:
On Tue, Jul 3, 2012 at 9:48 AM, James Litchfield
jim.litchfi...@oracle.com wrote:
On 07/02/12 15:00, Nico Williams wrote:
You can't count on any writes to mmap(2)ed files hitting disk until
you msync(2) them.
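The point above can be sketched in C: stores into a MAP_SHARED mapping are not guaranteed to reach disk until msync(2) with MS_SYNC returns. This is a minimal illustration, not code from the thread; the function name and error handling are my own:

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Write a buffer into a file through mmap(2), then msync(2) with
 * MS_SYNC so the data is on stable storage before returning.
 * Returns 0 on success, -1 on failure. */
int write_through_mmap(const char *path, const char *data, size_t len)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd == -1)
        return -1;
    if (ftruncate(fd, (off_t)len) == -1) {
        close(fd);
        return -1;
    }
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        close(fd);
        return -1;
    }
    memcpy(p, data, len);
    /* Without this call, the dirty pages may sit in memory
     * indefinitely; munmap alone does not force them out. */
    int rc = msync(p, len, MS_SYNC);
    munmap(p, len);
    close(fd);
    return rc;
}
```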
The value of zfs_arc_min specified in /etc/system must be over 64MB
(0x4000000).
Otherwise the setting is ignored. The value is in bytes, not pages.
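For reference, such a setting would look like the following in /etc/system. The 512MB value here is only an illustration; pick a size for your workload, keeping in mind that values of 64MB (0x4000000) or less are ignored:

```
* Set the minimum ZFS ARC size to 512MB (value in bytes, not pages).
* Must be greater than 64MB (0x4000000) or the setting is ignored.
set zfs:zfs_arc_min = 0x20000000
```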
Jim
---
On 10/ 6/11 05:19 AM, Frank Van Damme wrote:
Hello,
quick and stupid question: I'm breaking my head over how to tune
zfs_arc_min on a
a service contract. Doesn't take too long
for that kind of math to blow out any savings whiteboxes may have had.
Worst case, someone goes and buys Dell. :-)
--
James Litchfield | Senior Consultant
Phone: +1 4082237059 | Mobile: +1 4082180790
Oracle ACS
California
There is a 32-bit and 64-bit version of the file system module
available on x86. Given the quality of the development team, I'd be *very*
surprised if such issues as suggested in your message exist.
Jurgen's comment highlights the major issue - the lack of space to
cache data when in 32-bit
POSIX has a Synchronized I/O Data (and File) Integrity Completion
definition (line 115434 of the Issue 7 (POSIX.1-2008) specification).
What it
says is that writes for a byte range in a file must complete before any
pending
reads for that byte range are satisfied.
It does not say that if you
known issue? I've seen this 5 times over the past few days. I think
these were, for the most part, BFUs on top of B107. x86.
# pstack fmd.733
core 'fmd.733' of 733:/usr/lib/fm/fmd/fmd
-----------------  lwp# 1 / thread# 1  -----------------
fe8c3347 libzfs_fini (0, fed9e000, 8047d08,
I believe the answer is in the last email in that thread. hald doesn't offer
the notifications and it's not clear that ZFS can handle them. As is noted,
there are complications with ZFS due to the possibility of multiple disks
comprising a volume, etc. It would be a lot of work to make it work
A nit on the nit...
cat does not use mmap for files <= 32K in size. For those files
it's a simple read() into a buffer and write() it out.
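The small-file path described above can be sketched as a plain read(2)/write(2) copy loop. This is an illustration of the technique, not cat(1)'s actual source; the 32K threshold comes from the discussion, and the buffer size here is arbitrary:

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Copy everything from fd `in` to fd `out` with read(2)/write(2),
 * as cat(1) does for small files instead of mmap(2)ing them.
 * Returns 0 on success, -1 on error. */
int copy_by_read_write(int in, int out)
{
    char buf[8192];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0) {
        /* write(2) may write fewer bytes than asked; loop until
         * the whole chunk is out. */
        ssize_t off = 0;
        while (off < n) {
            ssize_t w = write(out, buf + off, (size_t)(n - off));
            if (w == -1)
                return -1;
            off += w;
        }
    }
    return (n == 0) ? 0 : -1;
}
```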
Jim
---
Chris Gerhard wrote:
A slight nit.
Using cat(1) to read the file to /dev/null will not actually cause the data
to be read thanks to the magic
After some errors were logged as to a problem with a ZFS file system,
I ran zpool status followed by zpool status -v...
# zpool status
pool: ehome
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore
From an email exchange with a HAL developer...
This comes about because I boot back and forth between Windows
and Solaris and when on the Windows side I have the drive unplugged.
On occasion, I forget to plug it back in before returning to Solaris.
I wonder then, if Solaris should export
of messages with hald.
Further questions will be directed in that direction.
Jim
---
James Litchfield wrote:
Indeed, after rebooting we see the following. You'll have to trust me that
/ehome and /ehome/v1 are the relevant ZFS filesystems. If it makes any
difference, this file system had been
I have a zfs pool on a USB hard drive attached to my system.
I had unplugged it and when I reconnect it, zpool import does
not see the pool.
# cd /dev/dsk
# fstyp c3t0d0s0
zfs
When I truss zpool import, it looks everywhere (seemingly) *but*
c3t0d0s0 for the pool...
The relevant portion...
Artem Kachitchkine wrote:
# fstyp c3t0d0s0
zfs
s0? How is this disk labeled? From what I saw, when you put an EFI label
on a USB disk, the whole disk device is going to be d0 (without a
slice). What do these commands print:
# fstyp /dev/dsk/c3t0d0
unknown_fstyp (no matches)
# fdisk -W -