Hmmm - I've got a fairly old copy of the zpool cache file (circa July), but
nothing structural has changed in the pool since that date. What other data is held
in that file? There have been some filesystem changes, but nothing critical is
in the newer filesystems.
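As an aside, one way to inspect what the cache file actually holds is to dump it with zdb; a minimal sketch, assuming this build's zdb supports the -U (alternate cache file) and -C (show configuration) options, and using a made-up path for the saved copy:

    # Dump the pool configurations recorded in an old zpool.cache copy
    zdb -U /backup/zpool.cache.july -C

The cache file records pool configuration (vdev layout, GUIDs, device paths) rather than filesystem contents, so filesystem-level changes since July would not be reflected in it anyway.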
Any particular procedure required
Are there any known issues involving VirtualBox using shared folders
from a ZFS filesystem?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b123
+ All that's really worth doing is what we do for others (Lewis Carroll)
Hi All,
I'm not sure whether what I'm seeing is by design or by misconfiguration. I created a
filesystem tank/zones to hold some zones, then created a specific zone
filesystem tank/zones/basezone. Then built a zone, setting
zonepath=/tank/zones/basezone.
If I zlogin to basezone, and do zfs list, it
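For reference, a rough sketch of the setup being described (the dataset names come from the post; the zonecfg/zoneadm invocations are the usual ones, not necessarily the poster's exact commands):

    # Parent filesystem for zones, plus one filesystem per zone
    zfs create tank/zones
    zfs create tank/zones/basezone
    chmod 700 /tank/zones/basezone

    # Configure, install and boot a zone rooted on that filesystem
    zonecfg -z basezone 'create; set zonepath=/tank/zones/basezone; commit'
    zoneadm -z basezone install
    zoneadm -z basezone boot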
On Sep 27, 2009, at 3:19 AM, Paul Archer p...@paularcher.org wrote:
So, after *much* wrangling, I managed to take one of my drives
offline, relabel/repartition it (because I saw that the first sector
was 34, not 256, and realized there could be an alignment issue),
and get it back into the
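A rough sketch of that sort of procedure (the pool name tank is made up, and c7d0 is taken from later in the thread; the relabel step itself is interactive and depends on the disk):

    # Take the disk out of service in the pool
    zpool offline tank c7d0

    # Relabel/repartition it with format(1M) so the data slice starts on
    # the desired sector (interactive; details omitted)
    format -d c7d0

    # Bring it back; if the slice layout changed, a replace may be needed
    zpool online tank c7d0        # or: zpool replace tank c7d0
    zpool status tank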
On 09/27/09 03:05 AM, Joerg Schilling wrote:
BTW: Solaris has tmpfs since late 1987.
Could you fix the Wikipedia article? http://en.wikipedia.org/wiki/TMPFS
it first appeared in SunOS 4.1, released in March 1990
It has been a de facto standard since then, as it e.g. helps to reduce compile
On Sep 27, 2009, at 2:34 AM, Chris Murray wrote:
Posting this question again as I originally tagged it onto the end
of a series of longwinded posts of mine where I was having problems
replacing a drive. After dodgy cabling and a few power cuts, I
finally got the new drive resilvered.
On Sep 27, 2009, at 12:05 AM, Joerg Schilling wrote:
Toby Thain t...@telegraphics.com.au wrote:
at least as of RHFC10. I have files in /tmp
going back to Feb 2008 :-). Evidently, quoting Wikipedia,
tmpfs is supported by the Linux kernel from version 2.4 and up.
On Sep 27, 2009, at 10:41, Frank Middleton wrote:
You bet! Provided the compiler doesn't use /var/tmp as IIRC early
versions of gcc once did...
I find using -pipe better:
-pipe
    Use pipes rather than temporary files for communication between
    the various stages of compilation.
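For instance (the file name is just an example), the flag is simply added to the normal compile line:

    # Use pipes between the preprocessor/compiler/assembler stages
    # instead of temporary files
    gcc -pipe -O2 -c hello.c -o hello.o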
Richard Elling richard.ell...@gmail.com wrote:
BTW: Solaris has tmpfs since late 1987.
It has been a de facto standard since then, as it e.g. helps to reduce
compile times.
Yep, and before that, there was just an rc script to rm everything in /tmp.
IIRC, SunOS-3.x did call (cd /tmp; rm
Frank Middleton f.middle...@apogeect.com wrote:
On 09/27/09 03:05 AM, Joerg Schilling wrote:
BTW: Solaris has tmpfs since late 1987.
Could you fix the Wikipedia article? http://en.wikipedia.org/wiki/TMPFS
it first appeared in SunOS 4.1, released in March 1990
It appeared with SunOS-4.0.
Good link - thanks. I'm looking at the details for that one and learning a
little zdb at the same time. I've got a situation perhaps a little different in
that I _do_ have a current copy of the slog in a file with what appears to be
current data.
However, I don't see how to attach the slog
Problem is that while it's back, the performance is horrible. It's
resilvering at about (according to iostat) 3.5MB/sec. And at some point, I
was zeroing out the drive (with 'dd if=/dev/zero of=/dev/dsk/c7d0'), and
iostat showed me that the drive was only writing at around 3.5MB/sec. *And*
it
I knew it would be something simple!! :-)
Now 3.63TB, as expected, and no need to export and import either! Thanks
Richard, that's done the trick.
Chris
On 09/27/09 11:25 AM, Joerg Schilling wrote:
Frank Middleton f.middle...@apogeect.com wrote:
Could you fix the Wikipedia article? http://en.wikipedia.org/wiki/TMPFS
it first appeared in SunOS 4.1, released in March 1990
It appeared with SunOS-4.0. The official release was probably February
On Sep 27, 2009, at 11:49 AM, Paul Archer p...@paularcher.org wrote:
Problem is that while it's back, the performance is horrible. It's
resilvering at about (according to iostat) 3.5MB/sec. And at some
point, I was zeroing out the drive (with 'dd if=/dev/zero of=/dev/dsk/c7d0'),
and iostat
This is what my /var/adm/messages looks like:
Sep 27 12:46:29 solaria genunix: [ID 403854 kern.notice] assertion failed:
ss == NULL, file: ../../common/fs/zfs/space_map.c, line: 109
Sep 27 12:46:29 solaria unix: [ID 10 kern.notice]
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice]
My controller, while normally a full RAID controller, has had its BIOS
turned off, so it's acting as a simple SATA controller. Plus, I'm seeing
this same slow performance with dd, not just with ZFS. And I wouldn't think
that write caching would make a difference with using dd (especially
On Sep 27, 2009, at 1:44 PM, Paul Archer p...@paularcher.org wrote:
My controller, while normally a full RAID controller, has had its
BIOS turned off, so it's acting as a simple SATA controller. Plus,
I'm seeing this same slow performance with dd, not just with ZFS.
And I wouldn't think
On Sep 27, 2009, at 8:49 AM, Paul Archer wrote:
Problem is that while it's back, the performance is horrible. It's
resilvering at about (according to iostat) 3.5MB/sec. And at some
point, I was zeroing out the drive (with 'dd if=/dev/zero of=/dev/dsk/c7d0'),
and iostat showed me that the
1:19pm, Richard Elling wrote:
The other thing that's weird is the writes. I am seeing writes in that
3.5MB/sec range during the resilver, *and* I was seeing the same thing
during the dd.
This is from the resilver, but again, the dd was similar. c7d0 is the
device in question:
r/s    w/s    [iostat columns; remainder of the output truncated]
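For anyone following along, these figures are typically gathered like this (the pool name tank is assumed; the interval is just an example):

    # Per-device throughput, extended statistics, 10-second samples
    iostat -xn 10

    # Resilver progress for the pool
    zpool status -v tank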
On Sun, 27 Sep 2009, dick hoogendijk wrote:
Are there any known issues involving VirtualBox using shared folders from a
ZFS filesystem?
I am not sure what you mean by 'shared folders' but I am using an NFS
mount to access the host ZFS filesystem. It works great. I have less
faith in
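For what it's worth, a minimal sketch of that kind of setup (the dataset name tank/home and the host address are made up; 192.168.56.1 is just the usual VirtualBox host-only address):

    # On the OpenSolaris host: export a ZFS filesystem over NFS
    zfs set sharenfs=on tank/home

    # In the guest: mount it from the host
    mount -F nfs 192.168.56.1:/tank/home /mnt    # Solaris guest
    # mount -t nfs 192.168.56.1:/tank/home /mnt  # Linux guest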
On Sun, Sep 27, 2009 at 10:06:16AM -0700, Andrew wrote:
This is what my /var/adm/messages looks like:
Sep 27 12:46:29 solaria genunix: [ID 403854 kern.notice] assertion failed:
ss == NULL, file: ../../common/fs/zfs/space_map.c, line: 109
Sep 27 12:46:29 solaria unix: [ID 10 kern.notice]
To: ZFS Developers.
I know we hate them but an "Are you sure?" may have helped here, and
may be a quicker fix than waiting for 4852783
(just thinking out loud here). Could the zfs command have worked out
c5d0 was a single disk and attaching it to the pool would have been
dumb?
Ryan Hirsch
Dick
I'm 99% sure I used to do this when I had OpenSolaris as my base OS to
an XP guest (no NFS client - Bob) for my $HOME
Now I use Vista as my base OS because I now work in an MS environment,
so sorry, I can't check. You having problems?
BTW: Thank goodness for VirtualBox when I want to do
On Sep 27, 2009, at 2:28 PM, Trevor Pretty wrote:
To: ZFS Developers.
I know we hate them but an "Are you sure?" may have helped here, and
may be a quicker fix than waiting for 4852783 (just thinking out
loud here). Could the zfs command have worked out c5d0 was a single
disk and
Bill Sommerfeld wrote:
On Fri, 2009-09-25 at 14:39 -0600, Lori Alt wrote:
The list of datasets in a root pool should look something like this:
...
rpool/swap
I've had success with putting swap into other pools. I believe others
have, as well.
Yes, that's true.
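For reference, the usual way to do that is with a zvol in the other pool (the name and size here are only an example):

    # Create a 4 GB volume in another pool and add it as swap
    zfs create -V 4g tank/swapvol
    swap -a /dev/zvol/dsk/tank/swapvol

    # Confirm the swap devices in use
    swap -l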
I have a box with 4 disks. It was my intent to place a mirrored root partition
on 2 disks on different controllers, then use the remaining space and the other
2 disks to create a RAID-5 configuration from which to export iSCSI LUNs for
use by other hosts.
The problem I'm having is that when I
On Sep 27, 2009, at 8:41 PM, Ron Watkins rwa...@gmail.com wrote:
I have a box with 4 disks. It was my intent to place a mirrored root
partition on 2 disks on different controllers, then use the
remaining space and the other 2 disks to create a raid-5
configuration from which to export
My goal is to have a mirrored root on c1t0d0s0/c2t0d0s0, another mirrored app
fs on c1t0d0s1/c2t0d0s1, and then a 3+1 RAID-5 across
c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7.
I want to play with creating iSCSI target LUNs on the RAID-5 partition, so I am
trying out OpenSolaris for the first time. In
Ron
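A rough sketch of the layout being described (the pool names apps and data are made up; the mirrored root pool itself is normally created by the installer, and shareiscsi is the legacy property, later superseded by COMSTAR):

    # Mirrored root pool on slice 0 of one disk per controller
    #   (set up by the installer; shown only for completeness)
    # zpool create rpool mirror c1t0d0s0 c2t0d0s0

    # Mirrored application pool on slice 1
    zpool create apps mirror c1t0d0s1 c2t0d0s1

    # 3+1 RAID-Z pool across slice 7 of all four disks
    zpool create data raidz c1t0d0s7 c1t1d0s7 c2t0d0s7 c2t1d0s7

    # Example iSCSI LUN carved out of the RAID-Z pool
    zfs create -V 10g data/lun0
    zfs set shareiscsi=on data/lun0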
That should work; it's not really different to SVM.
BTW: did you mean?
mirrored root on c1t0d0s0/c2t0d0s0
mirrored app on c1t1d0s0/c2t1d0s0
RAID-Z across c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7
I would then make slices 0 and 7 the same on all disks using fmthard
(BTW: I would not use 7, I
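The usual fmthard idiom for making the slices identical across disks (device names are only examples) is:

    # Copy the VTOC from the first disk to another so the slices line up
    prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2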
Yes, you are correct about the layout.
However, I don't appear to be able to control how the root pool is configured
when I install from the live-CD. It either takes:
a) The entire physical disk
b) A slice the same size as the physical disk
c) A smaller slice, but no way to get at the remaining
On Sep 27, 2009, at 10:05 PM, Ron Watkins rwa...@gmail.com wrote:
My goal is to have a mirrored root on c1t0d0s0/c2t0d0s0, another
mirrored app fs on c1t0d0s1/c2t0d0s1 and then a 3+1 RAID-5 across
c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7.
There is no need for the 2 mirrors both on c1t0 and