On 9/23/2010 at 12:38 PM Erik Trimble wrote:
| [snip]
|If you don't really care about ultra-low-power, then there's absolutely
|no excuse not to buy a USED server-class machine which is 1- or 2-
|generations back. They're dirt cheap, readily available,
| [snip]
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of devsk
If dedup is ON and the pool develops a corruption in a file, I can
never fix it because when I try to copy the correct file on top of the
corrupt file, the block hash
I have a USB flash drive which boots up my opensolaris install. What happens
is that whenever I move to a different machine, the root pool is lost because
the devids don't match with what's in /etc/zfs/zpool.cache and the system
just can't find the rpool.
See defect 4755 or defect
Actually, I figured it has nothing to do with /etc/zfs/zpool.cache. As part of
removing that file within a LiveCD, I was basically importing and exporting the
rpool.
So, the only thing required is an equivalent of 'zpool import -f rpool; zpool
export rpool'. I wonder why this can't be
If dedup is ON and the pool develops a corruption in a file, I can never fix it
because when I try to copy the correct file on top of the corrupt file,
the block hash will match the existing blocks and only the reference count
will be updated. The only way to fix it is to delete all snapshots
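(One workaround that might help here, though it is not something confirmed on
this thread: since the dedup property only affects new writes, turning it off
on the dataset before rewriting the file should make the corrected copy bypass
the DDT and get fresh blocks. The dataset and file names below are only
examples.)

zfs set dedup=off mypool/data       # new writes no longer dedup against the bad DDT entry
cp /backup/goodfile /mypool/data/goodfile
zfs set dedup=on mypool/data        # re-enable afterwards if desired

(The corrupt blocks stay allocated for as long as snapshots still reference
them, which is the "delete all snapshots" part above.)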
I have a USB flash drive which boots up my opensolaris install. What happens is
that whenever I move to a different machine,
the root pool is lost because the devids don't match with what's in
/etc/zfs/zpool.cache and the system just can't find the rpool.
Now, if I boot into a livecd, import that rpool, mount the FS temporarily,
remove /etc/zfs/zpool.cache, and reboot, the USB drive boots fine.
Exporting and re-importing the pool should have the same effect,
but how do I do that on the boot pool?
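(For reference, a rough sketch of the livecd workaround described above; the
alternate root and the BE dataset name are assumptions on my part, so adjust
for your install:)

zpool import -f -R /a rpool           # force-import under an alternate root
zfs mount rpool/ROOT/opensolaris      # temporarily mount the boot environment
rm /a/etc/zfs/zpool.cache             # drop the stale cache with the old devids
zpool export rpool                    # export cleanly, then reboot from the USB stick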
--
On Mon, Jul 19, 2010 at 9:40 PM, devsk funt...@yahoo.com wrote:
On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles merloc...@hotmail.com wrote:
What supporting applications are there on Ubuntu for RAIDZ?
None. Ubuntu doesn't officially support ZFS.
You can kind of make it work using the ZFS-FUSE project, but it's not
stable, nor recommended.
I have many core files stuck in snapshots eating up gigs of my disk space. Most
of these snapshots are BEs which I don't really want to delete right now.
Is there a way to get rid of them? I know snapshots are RO but can I do some
magic with clones and reclaim my space?
--
Thanks, Michael. That's exactly right.
I think my requirement is: writable snapshots.
And I was wondering if someone knowledgeable here could tell me if I could do
this magically by using clones without creating a tangled mess of branches,
because clones in a way are writable snapshots.
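(A minimal sketch of what the clone route could look like; the dataset,
snapshot and file names are invented for illustration:)

zfs clone mypool/ROOT/be1@snap1 mypool/be1_edit    # writable "branch" of the snapshot
rm /mypool/be1_edit/var/cores/core.1234            # remove the core file in the clone

(Note the catch: the blocks aren't freed while the original snapshot still
references them, so actually reclaiming the space means promoting the clone and
destroying the original snapshot/BE, which is exactly where the tangled-branches
worry comes in.)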
--
On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles merloc...@hotmail.com wrote:
What supporting applications are there on Ubuntu for RAIDZ?
None. Ubuntu doesn't officially support ZFS.
You can kind of make it work using the ZFS-FUSE project. But it's not
stable, nor recommended.
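(If you do want to experiment anyway, a rough sketch using the zfs-fuse
package, assuming it is available in your release's repositories; the pool name
and device names are just examples, and the stability caveats above still
apply:)

sudo apt-get install zfs-fuse
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
zpool status tank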
I have
'zpool scrub mypool' comes out clean.
zfs send -Rv myp...@blah | ssh ...
reports an IO error. And indeed, 'zpool status -v' shows errors in some files in
an older snapshot.
I repeat the scrub (it finds no errors) and clear the pool, and the send now
fails on a different set of files.
How can this happen?
What
This is the fourth time it's happened.
I had a clean scrub before starting the send 15 minutes ago. And now, I see
permanent errors in one of the files of one of the snapshots. Different file
from last time.
How do I find out what's going on? 'dmesg' in the guest or kernel does not have any
relevant
Funny thing is that if I enable the snapdir and 'cat' the file, it doesn't report an
IO error.
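(For anyone following along, these are the commands involved, roughly in the
order they were run; the snapshot and host names here are placeholders, the
real ones appear elsewhere in the thread:)

zpool scrub mypool                  # comes back clean
zfs send -Rv mypool@somesnap | ssh backuphost zfs recv -vuF backup/mypool
zpool status -v mypool              # lists the files with permanent errors after the failed send
zpool clear mypool                  # reset the error counters before the next scrub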
--
And the numbers? 473164k is not the same as 256MB as per the table in that page. If
you can explain the individual numbers and how they add up across 'swap -s',
'swap -l' and 'top -b', that would be great!
-devsk
From: Erik Trimble erik.trim...@oracle.com
Cc: devsk funt
$ swap -s
total: 473164k bytes allocated + 388916k reserved = 862080k used, 6062060k available
$ swap -l
swapfile             dev    swaplo   blocks     free
/dev/dsk/c6t0d0s1    215,1       8 12594952 12594952
Can someone please do the math for me here? I am not able to figure out the total.
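(My own attempt at the math, based on how I understand Solaris swap accounting;
treat it as a sketch rather than an authoritative answer:

  swap -s:  473164k allocated + 388916k reserved  = 862080k used
            862080k used + 6062060k available     = 6924140k of total virtual swap
  swap -l:  12594952 blocks x 512 bytes per block = 6297476k on the physical swap slice

The virtual-swap total is bigger than the physical slice because Solaris also
counts a chunk of unlocked physical memory as swap, which is why 'swap -s',
'swap -l' and 'top' never line up exactly.)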
I had an unclean shutdown because of a hang and suddenly my pool is degraded (I
realized something is wrong when python dumped core a couple of times).
This is before I ran scrub:
  pool: mypool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
I think both Bob and Thomas have it right. I am using VirtualBox and just
checked, the host IO is cached on the SATA controller, although I thought I had
it enabled (this is VB-3.2.0).
Let me run this mode for a while and see if this happens again.
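(In case it helps anyone else, the setting being discussed can be checked or
changed from the host with VBoxManage; a sketch, where the VM and controller
names are whatever you used when creating the VM:)

VBoxManage storagectl "opensolaris-vm" --name "SATA Controller" --hostiocache on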
--
Sequential write for large files has no real difference in speed between
an SSD and a HD.
That's not true. Indilinx-based SSDs can write up to 200MB/s sequentially, and
SandForce-based ones even more. I don't know of any HD that can do that. Most HDs
are considered good if they do half of that.
--
$ zfs list -t filesystem
NAME                  USED  AVAIL  REFER  MOUNTPOINT
datapool              840M  25.5G    21K  /datapool
datapool/virtualbox   839M  25.5G   839M  /virtualbox
mypool               8.83G  6.92G    82K  /mypool
I wrongly said myr...@dbus_gam_server_race_partly_solved-6pm-may30-2010. I
meant mypool.
This is the send command that failed:
time zfs send -Rv myp...@dbus_gam_server_race_partly_solved-6pm-may30-2010 |
ssh 192.168.0.6 zfs recv -vuF zfs-backup/opensolaris-backup/mypool
--
OK, I have no idea what ZFS is smoking...:-)
I was able to send the individual datasets to the backup server.
zfs-backup/opensolaris-backup/mypool        11.5G   197G    82K  /zfs-backup/opensolaris-backup/mypool
zfs-backup/opensolaris-backup/mypool/ROOT
Scrub has turned up clean again:
  pool: mypool
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer
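(If you do want to follow the 'action' line above, a sketch of the relevant
commands; note that once a pool is upgraded it can no longer be imported by
older bits, which matters if you move pools between systems, as comes up
elsewhere in this thread:)

zpool upgrade -v       # list the on-disk versions this build supports
zpool upgrade mypool   # upgrade the pool to the newest supported version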
This actually turned out to be a lot of fun! The end of it is that I have a hard
disk partition now which can boot in both the physical and virtual world (got rid
of the VDIs finally!). The physical world has outstanding performance but has
ugly graphics (1600x1200 vesa driver with weird DPI and
I had created a VirtualBox VM to test out OpenSolaris. I updated to the latest dev
build and set my things up. Tested pools and various configs/commands. Learnt
format/partition etc.
And then, I wanted to move this stuff to a Solaris partition on the physical
disk. VB provides physical disk
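(The raw-disk feature being referred to is, as far as I know, exposed through
VBoxManage; a rough sketch, where the vmdk path and the host device are only
examples:)

VBoxManage internalcommands createrawvmdk -filename ~/rawdisk.vmdk -rawdisk /dev/sda

(The resulting vmdk can then be attached to the VM like any other disk, giving
the guest direct access to the physical disk or partition.)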
I think I messed up just a notch!
When I did zfs send|recv, I used the flag -u (because I wanted it to not mount
at that time). But it set the fs property canmount to off for ROOT...YAY!
I booted into the livecd, imported mypool and fixed the mount points and the
canmount property. And I am now in
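(For the archives, the fix-up from the livecd would look roughly like this; the
BE name is a guess on my part and the alternate root is just a convention:)

zpool import -f -R /a mypool
zfs set canmount=noauto mypool/ROOT/opensolaris   # put canmount back (it had ended up as off)
zfs set mountpoint=/ mypool/ROOT/opensolaris
zpool export mypool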
I am getting a strange reset as soon as I run startx from a normal user's console
login.
How do I troubleshoot this? Any ideas? I removed /etc/X11/xorg.conf before
invoking startx because it would have some PCI bus ids in there which won't
be valid on the real hardware.
--
Looks like the X vesa driver can only use the 1600x1200 resolution and not the
native 1920x1200.
And if I pass -dpi to enforce 96 DPI, it just croaks.
With -dpi out of the way, I am inside X at 1600x1200 resolution.
Can anyone tell me how I can get the native 1920x1200 resolution working with
vesa
I have no idea why I posted this in zfs-discuss...ok, migration...I will post a
follow-up in help.
--
I had a pool which I created using zfs-fuse, which uses a March code base
(exact version, I don't know; if someone can tell me the command to find the
zpool format version, I would be grateful).
I exported it and have now tried to import it in OpenSolaris, which is running Feb
bits because it
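(Answering the parenthetical question above, to the best of my knowledge: the
pool's on-disk format is exposed as a pool property, and 'zpool upgrade' with no
arguments shows pools whose format is older than what the running bits support:)

zpool get version mypool   # on-disk format version of the pool
zfs get version mypool     # filesystem (ZPL) version, if that is needed too
zpool upgrade              # lists pools that are below the current software's version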
I went through with it and it worked fine. So, I could successfully move my ZFS
device to the beginning of the new disk.
--
Could this be a future enhancement for ZFS? Like provide 'zfs move fs1/path1
fs2/path2', which will do the needful without really copying anything?
--
This is really painful. My source was a backup of my folders which I wanted as
filesystems in the RAIDZ setup. So, I copied the source to the new pool and
wanted to be able to move those folders to different filesystems within the
RAIDZ. But it's turning out to be a brand new copy and since it's
Is there anything anybody has to advise? Will I be better off copying each
folder into its own FS from the source pool? How about removal of the stuff that's
now in this FS? How long will the removal of 770GB of data containing 6 million
files take?
Cost 1: copy folders into respective FS + remove
are with compression=off. /tmp is RAM.
-devsk
--