Well, actually you've scored a hit on both ideas I had after reading the
question ;)
One more idea though: is it possible to change the disk controller mode in
BIOS, e.g. to generic IDE? Hopefully that might work, even if
sub-optimally...
AFAIK FreeBSD 8.x is limited to "stable" ZFSv15, and "e
This dedup discussion (and my own bad experience) has also
left me with another grim thought: some time ago sparse-root
zone support was ripped out of OpenSolaris.
Among the published rationales were the transition to IPS and the
assumption that most people used them to save on disk space
(notion ab
> You and I seem to have different interpretations of the
> empirical "2x" soft-requirement to make dedup worthwhile.
Well, until recently I had little interpretation for it at all, so your
approach may be better.
I hope the authors of the requirement statement will step
forward and explain
2011-07-12 23:14, Eric Sproul wrote:
So finding drives that keep more space in reserve is key to getting
consistent performance under ZFS.
I think I've read in a number of early SSD reviews
(possibly regarding Intel devices - not certain now)
that the vendor provided some low-level formatting
t
igned to that) - I wonder if I could set an
appropriate ashift for the cache device, and how much
would I lose or gain with that (would ZFS care and/or
optimize somehow?)
HTH,
//Jim Klimov
2011-07-14 11:54, Frank Van Damme wrote:
On 12-07-11 13:40, Jim Klimov wrote:
Even if I batch background rm's so that a hundred processes hang
and then all complete at once within a minute or two.
Hmmm. I only run one rm process at a time. You think running more
processes at the same time
2011-07-14 15:48, Frank Van Damme wrote:
It seems counter-intuitive - you'd think concurrent disk access would only
make things slower - but it turns out to be true. I'm deleting a
dozen times faster than before. How completely ridiculous. Thank you :-)
Well, look at it this way: it is not only ab
mode. I am not eager to enter
"yes" 40 times ;)
The way I had this script in practice, I could enter "RM"
once and it worked till the box hung. Even then, a watchdog
script could often have it rebooted without my interaction
so it could continue in the next lifetime ;)
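Not the actual script, but a minimal sketch of the batching idea (the path
and the batch size of 100 are made up):
{code}
# Fire off removals in the background in batches of 100,
# so many deletions are in flight against the deduped dataset at once
i=0
for d in /pool/trash/*; do
    rm -rf "$d" &
    i=$((i+1))
    [ $((i % 100)) -eq 0 ] && wait    # let the current batch drain
done
wait
{code}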
2011-07-15 11:10, phil.har...@gmail.com wrote:
If you clone zones from a golden image using ZFS cloning, you get
fast, efficient dedup for free. Sparse root always was a horrible hack!
Sounds like a holy war is flaming up ;)
From what I heard, sparse root zones with shared common
system librari
r maybe even use a USB-CF card reader thingie.
--
Климов Евгений, Jim Klimov
technical director
2011-07-16 20:42, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Well, in terms of mirroring over stripes, if any component of any stripe
breaks, the whole half of the mirror is degraded. If another drive
2011-07-17 23:13, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
if the OP were so inclined,
he could craft a couple of "striped" pools (300+500) and
then make a ZFS pool over these two.
Act
Hello, some time ago I saw that development ISOs of OpenIndiana
dubbed "build 151" exist. How close to or far from the sol11ex 151a
is it? In particular, regarding ZFS/ZPOOL version and functionality?
Namely, some people on the list report having problems with their
pools built with zpool v28
Just my 2c: Is it possible to do an "offline" dedup, kind of like snapshotting?
What I mean in practice is: we make many Solaris full-root zones. They share a
lot of data as complete files. This makes it kind of easy to save space - make one
zone as a template, snapshot/clone its dataset, make new zo
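For illustration, with an assumed dataset layout, the template approach
boils down to:
{code}
# One template zone dataset, cloned per new zone - blocks are shared for free
zfs snapshot pool/zones/template@golden
zfs clone pool/zones/template@golden pool/zones/zone01
zfs clone pool/zones/template@golden pool/zones/zone02
{code}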
Ok, thank you Nils and Wade for the concise replies.
After much reading I agree that the queued ZFS development features do deserve
a higher ranking on the priority list (pool-shrinking/disk-removal and
user/group quotas would be my favourites), so probably the deduplication tool
I'd need would, i
Is it possible to create a (degraded) zpool with placeholders specified instead
of actual disks (parity or mirrors)? This is possible in Linux mdadm (the "missing"
keyword), so I kinda hoped this could be done in Solaris too, but didn't manage to.
Usecase scenario:
I have a single server (or home worksta
For the sake of curiosity, is it safe to have components of two different ZFS
pools on the same drive, with and without HDD write cache turned on?
How will ZFS itself behave, would it turn on the disk cache if the two imported
pools co-own the drive?
An example is a multi-disk system like mine
Thanks Tomas, I haven't checked yet, but your workaround seems feasible.
I've posted an RFE and referenced your approach as a workaround.
That's nearly what zpool should do under the hood, and perhaps can be done
temporarily with a wrapper script to detect min(physical storage sizes) ;)
//Jim
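For the archives, my understanding of such a workaround (a sketch with
assumed names and sizes, not Tomas's exact commands):
{code}
# Use a sparse file as a placeholder vdev, then offline it
mkfile -n 500g /var/tmp/fakedisk
zpool create tank raidz c0t1d0 c0t2d0 /var/tmp/fakedisk
zpool offline tank /var/tmp/fakedisk
rm /var/tmp/fakedisk                 # the pool now runs degraded but usable
# later, when a real disk arrives:
#   zpool replace tank /var/tmp/fakedisk c0t3d0
{code}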
Thanks to all those who helped, even despite the "non-enterprise approach" of
this question ;)
While experimenting I discovered that Solaris /tmp doesn't seem to support
sparse files: "mkfile -n" still creates full-sized files, which can either use
up the swap space or not fit there at all. ZFS and UFS
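A quick illustration (made-up size and paths):
{code}
# On tmpfs the sparse flag seems to be ignored; on ZFS it is honored
mkfile -n 1g /tmp/sparsefile      # eats a full 1g of swap-backed /tmp
mkfile -n 1g /pool/sparsefile     # occupies almost no blocks
du -h /tmp/sparsefile /pool/sparsefile
{code}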
And one more note: while I could offline both "fake drives" in my OpenSolaris
tests, the Solaris 10u6 box refused to offline the second drive, since that
would leave the pool without parity.
{code}
[r...@t2k1 /]# zpool status test
pool: test
state: ONLINE
scrub: none requested
config:
NAME
...and, apparently, I can replace two drives at the same time (with two
commands), and resilvering goes in parallel:
{code}
[r...@t2k1 /]# zpool status pool
pool: pool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
pointers to file data (triplets which go as parameters to zdb -R)?
//Thanks in advance, we're expecting a busy weekend ;(
//Jim Klimov
"zpool history" has shed a little light. Lots actually.
The sub-dataset in question was indeed created, and at the time ludelete was run
there are some entries along the lines of "zfs destroy -r pond/zones/zonename".
There's no precise details (names, mountpoints) about the destroyed datasets -
an
Hello Mark, Darren,
Thank you guys for suggesting "zpool history", upon which we stumbled before
receiving your comments. Nonetheless, the history results are posted above.
Still no luck trying to dig out the dataset data, so far.
As I get it, there are no (recent) backups which is a poor practi
are currently limited to 255 bytes. That bug bit me once last year - so NTFS
files had to be renamed.
Hope this helps, let us know if it does ;)
//Jim Klimov
I meant to add that, due to the sheer amount of data to copy (and the time
needed), you really don't want to use copying tools which abort on error,
such as MS Explorer.
Normally I'd suggest something like FAR in Windows or Midnight Commander in Unix
to copy over networked connections (CIFS shares), o
True, correction accepted, covering my head with ashes in shame ;)
We do use a custom-built package of rsync-3.0.5 with a number of their standard
contributed patches applied. To be specific, these:
checksum-reading.diff
checksum-updating.diff
detect-renamed.diff
downdate.diff
fileflags.diff
fs
Do you have any older benchmarks on these cards and arrays (from their pre-ZFS
life)? Perhaps this is not a ZFS regression but a hardware config issue?
Perhaps there's some caching (like per-disk write-through mode) not enabled on
the arrays? As you may know, the ability (and reliability) of such cache
Hmm, scratch that. Maybe.
I did not at first get the point that your writes to a filesystem dataset
work quickly.
Perhaps the filesystem is indeed (better) cached, i.e. *maybe* zvol writes are
synchronous while zfs filesystem writes may be cached and thus async? Try
playing around with the relevant dataset attributes.
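For example (dataset names here are placeholders, and the sync property only
exists on recent builds; older ones had cruder knobs):
{code}
# Sketch: inspect and toggle attributes that influence write caching
zfs get sync,compression,primarycache pool/fs pool/vol
zfs set sync=disabled pool/vol    # for testing only: makes zvol writes async
zfs set sync=standard pool/vol    # restore the default afterwards
{code}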
To tell the truth, I expected zvols to be faster than filesystem datasets.
They seem to have less overhead without inodes, POSIX attributes, ACLs and
so on. So I'm puzzled by the test results.
I'm now considering the dd i/o block size, and it means a lot indeed,
especially if compared to zvol results with sm
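For example, something like this (made-up sizes, using the raw zvol device
path):
{code}
# Same 1GB of data written with small vs large dd block sizes
dd if=/dev/zero of=/dev/zvol/rdsk/pool/vol bs=8k count=131072
dd if=/dev/zero of=/dev/zvol/rdsk/pool/vol bs=1024k count=1024
{code}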
of these usually :)
You can simply import these files into a zfs pool with a script like:
# for F in *.zfsshot.gz; do echo "=== $F"; gzcat "$F" | time zfs recv -Fvd
pool; done
It's probably better to run "zfs recv -nFvd" first (no-write verbose mode) to
be certain about your write-targets and about overwriting stuff (i.e. "zfs
recv -F" would destroy any newer snapshots - so you can first check which
ones exist, and possibly clone/rename them first).
// HTH, Jim Klimov
You might also want to force ZFS into accepting a faulty root pool:
# zpool set failmode=continue rpool
//Jim
> I installed opensolaris and setup rpool as my base install on a single 1TB
> drive
If I understand correctly, you have rpool and the data pool configured all
as one pool?
That's probably not what you'd really want. For one thing, the bootable root
pool should all be available to GRUB from a s
One more note,
> For example, if you were to remake the pool (as suggested above for rpool and
> below for the raidz data pool) - where would you re-get the original data to
> copy over again?
Of course, if you take up the idea of buying 4 drives and building a raidz1
vdev right away, an
After reading many-many threads on ZFS performance today (top of the list in
the forum, and some chains of references), I applied a bit of tuning to the
server. In particular, I've set zfs_write_limit_override to 384MB so my cache
is spooled to disks more frequently (if streaming lots of w
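For reference, this is roughly how such a tunable is applied (the mdb
one-liner is a sketch from memory - verify on a test box first):
{code}
# 384MB = 402653184 bytes; live change on a running kernel:
echo zfs_write_limit_override/W0t402653184 | mdb -kw
# or, persistent across reboots, in /etc/system:
#   set zfs:zfs_write_limit_override=402653184
{code}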
> Trying to spare myself the expense as this is my home system so budget is
> a constraint.
> What I am trying to avoid is having multiple raidz's, because every time I
> have another one I lose a lot of extra space to parity. Much like in raid 5.
There's a common perception which I tend to sh
You might also search for OpenSolaris NAS projects. Some that I've seen
previously
involve nearly the same config you're building - a CF card or USB stick with
the OS
and a number of HDDs in a zfs pool for the data only.
I am not certain which ones I've seen, but you can look for EON, and Pulsar
I did a zpool scrub recently, and while it was running it reported errors and
warned about restoring from backup. When the scrub completed, though, it
reported finishing with 0 errors. On the next scrub, some other errors are
reported in different files.
"iostat -xne" does report a few errors (1 s
Hello tobex,
While the original question may have been answered by the posts above, I'm
interested: when you say "according to zfs list the zvol is 100% full", does
it only mean that it uses all 20Gb on the pool (like a non-sparse uncompressed
file), or does it also imply that you can't write int
> If I understand you right it is as you said.
> Here's an example and you can see what happened.
> The sam-fs is filled to only 6% and the zvol is full.
I'm afraid I was not clear with my question, so I'll elaborate. It remains
standing as: during this situation, can you write new data int
Concerning the reservations, here's a snip from "man zfs":
The reservation is kept equal to the volume's logical
size to prevent unexpected behavior for consumers.
Without the reservation, the volume could run out of
space, resulting in undefined behavior or data corruption,
depending on how the volume is used.
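To illustrate the difference (hypothetical dataset names):
{code}
# A regular zvol gets refreservation == volsize; a sparse one (-s) does not
zfs create -V 20g pool/vol
zfs get volsize,refreservation pool/vol
zfs create -s -V 20g pool/sparsevol   # sparse zvol: can overcommit the pool
{code}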
I've hit the problem myself recently, and mounting the filesystem cleared
something in the brains of ZFS and allowed me to snapshot.
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg00812.html
PS: I'll use Google before asking some questions, à la (C) Bart Simpson
That's how I found yo
It's good he didn't mail you, now we all know some under-the-hood details via
Googling ;)
Thanks to both of you for this :)
I've installed SXDE (snv_89) and found that the web console only listens on
https://localhost:6789/ now, and the module for ZFS admin doesn't work.
When I open the link, the left frame lists a stacktrace (below) and the right
frame is plain empty. Any suggestions?
I tried substituting different
We have a test machine installed with a ZFS root (snv_77/x86 and
"rootpol/rootfs" with grub support).
Recently tried to update it to snv_89 which (in Flag Days list) claimed more
support for ZFS boot roots, but the installer disk didn't find any previously
installed operating system to upgrade.
You mean this:
https://www.opensolaris.org/jive/thread.jspa?threadID=46626&tstart=120
Elegant script, I like it, thanks :)
Trying now...
Some patching follows:
-for fs in `zfs list -H | grep "^$ROOTPOOL/$ROOTFS" | awk '{ print $1 };'`
+for fs in `zfs list -H | grep "^$ROOTPOOL/$ROOTFS" | grep -w
Alas, it didn't work so far.
Could the problem be that the zfs-root disk is not the first on the controller
(the system boots from GRUB on the older ufs-root slice), and/or that zfs is
mirrored? And that I have snapshots and a data pool too?
These are the boot disks (SVM mirror with ufs and grub):
No, I did not set that property; not now, not in previous releases.
Nice to see "secure by default" coming to the admin tools as well.
Waiting for SSH to be bound to 127.0.0.1:22 by default sometime... just kidding ;)
Thanks for the tip!
Any ideas about the stacktrace? It's still there in place of the web GUI.
I checked - this system has a UFS root. When installed as snv_84 and then LU'd
to snv_89, and when I fiddled with these packages from various other releases,
it had the stacktrace instead of the ZFS admin GUI (or the well-known
"smcwebserver restart" effect for the older packages).
This system
Likewise. Just plain doesn't work.
Not required though, since the command-line is okay and way powerful ;)
And there are some more interesting challenges to work on, so I didn't push
this problem any more yet.
Interesting, we'll try that.
Our server with the problem has been boxed now, so I'll check the solution when
it gets on site.
Thanks in advance, anyway ;)