Hello all,
I have a kind of lame question here: how can I force the system (OI)
to probe all the HDD controllers and disks that it can find, and be
certain that it has searched everywhere for disks?
My remotely supported home-NAS PC was unavailable for a while, and
a friend rebooted it for m
On 2013-03-21 16:24, Ram Chander wrote:
Hi,
Can I know how to configure an SSD to be used for L2ARC? Basically I
want to improve read performance.
The "man zpool" page is quite informative on theory and concepts ;)
If your pool already exists, you can prepare the SSD (partition/slice
it) and:
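For example, roughly like this (pool and device names here are only
placeholders - substitute your own):
# zpool add mypool cache c5t2d0
# zpool status mypool
The cache device gets populated as reads come in, and it can later be
removed again with "zpool remove mypool c5t2d0" if needed.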
On 2013-03-20 17:15, Peter Wood wrote:
I'm going to need some help with the crash dumps. I'm not very familiar
with Solaris.
Do I have to enable something to get the crash dumps? Where should I
look for them?
Typically, kernel crash dumps are created as a result of a kernel
panic; also they m
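As a quick sketch of where to look (defaults may differ on your box):
# dumpadm
(shows the dump device, the savecore directory - typically
/var/crash/<hostname> - and whether savecore is enabled)
# ls -l /var/crash/`hostname`
(saved dumps show up there as unix.N/vmcore.N, or vmdump.N, files
after a panic)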
On 2013-03-19 22:07, Andrew Gabriel wrote:
The GPT partitioning spec requires the disk to be FDISK
partitioned with a single FDISK partition of type EFI,
so that tools that predate GPT partitioning will still see
such a GPT disk as fully assigned to FDISK partitions, and
therefore less li
irrelevant on GPT/EFI - no SMI slices there.
On my old home NAS with OpenSolaris I certainly did have
MBR partitions on the rpool disk, initially intended for some
dual-booted OSes but repurposed as L2ARC and ZIL devices
for the storage pool on other disks, when I played with
that technology. Didn
On 2013-03-16 15:20, Bob Friesenhahn wrote:
On Sat, 16 Mar 2013, Kristoffer Sheather @ CloudCentral wrote:
Well, off the top of my head:
2 x Storage Heads, 4 x 10G, 256G RAM, 2 x Intel E5 CPUs
8 x 60-Bay JBODs with 60 x 4TB SAS drives
RAIDZ2 stripe over the 8 x JBODs
That should fit within
ardless of what the boxes' individual power sources can
do. Conveniently, they also allow a remote hard-reset of hung
boxes without walking to the server room ;)
My 2c,
//Jim Klimov
On 2013-03-15 01:58, Gary Driggs wrote:
On Mar 14, 2013, at 5:55 PM, Jim Klimov wrote:
However, recently the VM "virtual hardware" clocks became way slow.
Does NTP help correct the guest's clock?
Unfortunately no: neither guest NTP, ntpdate nor rdate in crontabs,
nor Virt
On 2013-03-11 21:50, Bob Friesenhahn wrote:
On Mon, 11 Mar 2013, Tiernan OToole wrote:
I know this might be the wrong place to ask, but hopefully someone can
point me in the right direction...
I got my hands on a Sun x4200. It's the original one, not the M2, and
has 2 single-core Opterons, 4Gb R
en you'd use a lot less space (and you'd not see a
.garbledfilename in the directory during the process).
If you use rsync over the network to back up stuff, here's an example
of an SMF wrapper for rsyncd, and a config sample to make a snapshot
after completion of the rsync session.
http://wiki.o
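The snapshot step itself is trivial - something like this (the dataset
name is hypothetical):
# zfs snapshot backup/hosts/myhost@rsync-`date +%Y%m%d-%H%M`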
Ah, I forgot to mention - ufsdump|ufsrestore was at one time also
a recommended way of doing such a transition ;)
I think it should be aware of all the intricacies of the FS, including
sparse files, which reportedly may puzzle some other archivers.
Although with any sort of ZFS compression (including lightwei
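A sketch of the ufsdump|ufsrestore pipeline, with hypothetical device
and target names:
# ufsdump 0f - /dev/rdsk/c0t0d0s7 | ( cd /tank/olddata && ufsrestore rf - )
(a level-0 dump of the UFS slice streamed straight into ufsrestore
running in the target ZFS filesystem's directory)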
e life from application
data and non-packaged applications, which might simplify backups, etc.,
and you might be able to store these pieces in different pools (e.g.
SSDs for some data and HDDs for others - though most list members would
rightfully argue in favor of L2ARC on the SSDs).
HTH,
//Jim
On 2013-02-21 17:02, John D Groenveld wrote:
# zfs list -t vol
NAME          USED  AVAIL  REFER  MOUNTPOINT
rpool/dump    4.00G  99.9G  4.00G  -
rpool/foo128  66.2M   100G    16K  -
rpool/swap    4.00G  99.9G  4.00G  -
# zfs destroy rpool/foo128
cannot destroy 'rpool/foo128': volume is busy
C
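A few usual suspects to check before trying to destroy a busy zvol
(just a sketch, not an exhaustive list):
# swap -l               (is the zvol in use as a swap device?)
# dumpadm               (is it the configured dump device?)
# stmfadm list-lu -v    (is it exported as a COMSTAR LU?)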
On 2013-02-21 16:54, Markus Grundmann wrote:
Is there anyone here on the list who has some tips for me on which
files to modify? :-)
In my current source tree a new property "PROTECTED" is now available,
both for pool and "zfs" objects. I have also added two functions to
"get" and "set" the pro
On 2013-02-20 23:49, Markus Grundmann wrote:
add a pool / filesystem property as an additional security layer for
administrators.
Whenever I modify zfs pools or filesystems it's possible to destroy [on
a bad day :-)] my data. A new
property "protected=on|off" in the pool and/or filesystem can h
On 2013-02-19 17:02, Victor Latushkin wrote:
On 2/19/13 6:32 AM, Jim Klimov wrote:
On 2013-02-19 14:24, Konstantin Kuklin wrote:
zfs set canmount=off zroot/var/crash
I can't do this, because zfs list is empty
I'd argue that in your case it might be desirable to evacuate data and
reinstal
On 2013-02-19 14:24, Konstantin Kuklin wrote:
zfs set canmount=off zroot/var/crash
I can't do this, because zfs list is empty
I'd argue that in your case it might be desirable to evacuate data and
reinstall the OS - just to be certain that ZFS on-disk structures on
new installation have no def
On 2013-02-19 12:39, Konstantin Kuklin wrote:
I didn't replace a disk; after a reboot the system did not start (ZFS is
installed as the default root system), so I booted from another system
(from flash), and resilvering auto-started and showed me warnings with
frozen progress (dead while checking zroot/var/crash).
Well,
Also, adding to my recent post: instead of resilvering, try to run
"zpool scrub" first - it should verify all checksums and repair
whatever it can via redundancy (for metadata - extra copies).
Resilver is similar to scrub, but it has other goals and a different
implementation, and might not be so forgivi
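For example:
# zpool scrub zroot
# zpool status -v zroot
(the latter shows scrub progress and lists any files or objects with
unrecoverable errors)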
usr:<0x2aff>
zroot/var/crash:<0x0>
root@Flash:/root #
How can I delete or drop the fs zroot/var/crash (1M-10M in size, I don't
remember exactly) and mount the other zfs mountpoints with my data?
--
Best regards,
Konstantin Kuklin.
Good luck,
//Jim Klimov
Hello Cindy,
Are there any plans to preserve the official mailing lists' archives,
or will they go the way of the Jive forums, so that future digs for bits
of knowledge will rely on alternate mirrors and caches?
I understand that Oracle has some business priorities, but retiring
hardware causes site
On 2013-02-16 21:49, John D Groenveld wrote:
By the way, whatever the error message is when booting, it disappears so
quickly I can't read it, so I am only guessing that this is the reason.
Boot with kernel debugger so you can see the panic.
And that would be so:
1) In the boot loader (GRUB) ed
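Roughly, the kernel$ line in menu.lst would gain a "-k" flag, something
like this (the exact line varies between installations):
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -k -v
With kmdb loaded, a panic drops into the debugger instead of flashing
by, so the message can actually be read.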
On 2013-02-12 10:32, Ian Collins wrote:
Ram Chander wrote:
Hi Roy,
You are right. So it looks like a re-distribution issue. Initially there
were two vdevs with 24 disks (disks 0-23) for close to a year. After
which we added 24 more disks and created additional vdevs. The
initial vdevs are fi
tter for your cause.
Inspect the "zpool" source to see where it gets its numbers from...
and perhaps make and RTI relevant kstats, if they aren't yet there ;)
On the other hand, I am not certain how Solaris-based kstats interact
or correspond to struc
rites into it - regardless of absence or
presence (and type) of compression on the original dataset.
HTH,
//Jim Klimov
On 2013-02-08 22:47, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
Maybe this isn't exactly what you need, but maybe:
for fs in `zfs list -H -o name` ; do echo $fs ; zfs get
reservation,refreservation,usedbyrefreservation $fs ; done
What is the sacramental purpose of such co
On 2013-02-04 17:10, Karl Wagner wrote:
OK then, I guess my next question would be what's the best way to
"undedupe" the data I have?
Would it work for me to zfs send/receive on the same pool (with dedup
off), deleting the old datasets once they have been 'copied'? I think I
remember reading som
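Roughly, such a local re-copy could look like this (dataset names are
hypothetical; make sure dedup is off for the destination so the copy
isn't deduped again, and only destroy the original after verifying):
# zfs set dedup=off pool
# zfs snapshot pool/data@undedup
# zfs send pool/data@undedup | zfs receive pool/data.new
# zfs destroy -r pool/data
# zfs rename pool/data.new pool/data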
On 2013-02-04 15:52, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
I noticed that sometimes I had terrible rates with < 10MB/sec. Then
later it rose up to < 70MB/sec.
Are you talking about scrub rates for the complete scrub? Because if you sit
there and watch it, from minute
On 2013-01-24 11:06, Darren J Moffat wrote:
On 01/24/13 00:04, Matthew Ahrens wrote:
On Tue, Jan 22, 2013 at 5:29 AM, Darren J Moffat wrote:
Preallocated ZVOLs - for swap/dump.
Darren, good to hear about the cool stuff in S11.
Yes, thanks, Darren :)
J
On 2013-01-23 09:41, casper@oracle.com wrote:
Yes and no: the system reserves a lot of additional memory (Solaris
doesn't over-commit swap) and swap is needed to support those
reservations. Also, some pages are dirtied early on and never touched
again; those pages should not be kept in memo
The discussion suddenly gets hot and interesting - albeit it has diverged
quite a bit from the original topic ;)
First of all, as a disclaimer: when I earlier proposed such changes
to datasets for swap (and maybe dump) use, I explicitly proposed that
this be a new dataset type - compared to zvol and f
On 2013-01-22 23:32, Nico Williams wrote:
IIRC dump is special.
As for swap... really, you don't want to swap. If you're swapping you
have problems. Any swap space you have is to help you detect those
problems and correct them before apps start getting ENOMEM. There
*are* exceptions to this,
On 2013-01-22 23:03, Sašo Kiselkov wrote:
On 01/22/2013 10:45 PM, Jim Klimov wrote:
On 2013-01-22 14:29, Darren J Moffat wrote:
Preallocated ZVOLs - for swap/dump.
Or is it also supported to disable COW for such datasets, so that
the preallocated swap/dump zvols might remain contiguous on
On 2013-01-22 14:29, Darren J Moffat wrote:
Preallocated ZVOLs - for swap/dump.
Sounds like something I proposed on these lists, too ;)
Does this preallocation only mean filling an otherwise ordinary
ZVOL with zeroes (or some other pattern) - if so, to what effect?
Or is it also supported to d
On 2013-01-21 07:06, Stephan Budach wrote:
Are there switch stats on whether it has seen media errors?
Has anybody gotten QLogic's SanSurfer to work with anything newer than
Java 1.4.2? ;) I checked the logs on my switches and they don't seem to
indicate such issues, but I am lacking the real-ti
On 2013-01-20 17:16, Edward Harvey wrote:
But, by talking about it, we're just smoking pipe dreams. Cuz we all know zfs
is developmentally challenged now. But one can dream...
I beg to disagree. While most of my contribution has so far been about
learning stuff and sharing with others, as well as
Did you try replacing the patch-cables and/or SFPs on the path
between servers and disks, or at least cleaning them? A speck
of dust (or, God forbid, a pixel of body fat from a fingerprint)
caught between the two optic cable ends might cause any kind
of signal weirdness from time to time... and
On 2013-01-20 16:56, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
And regarding the "considerable activity" - AFAIK there is little way
for ZFS to reliabl
On 2013-01-20 19:55, Tomas Forsman wrote:
On 19 January, 2013 - Jim Klimov sent me these 2,0K bytes:
Hello all,
While revising my home NAS, which had dedup enabled before I gathered
that its RAM capacity was too puny for the task, I found that there is
some deduplication among the data bits
On 2013-01-19 23:39, Richard Elling wrote:
This is not quite true for raidz. If there is a 4k write to a raidz
comprised of 4k sector disks, then
there will be one data and one parity block. There will not be 4 data +
1 parity with 75%
space wastage. Rather, the space allocation more closely rese
On 2013-01-19 20:23, Jim Klimov wrote:
On 2013-01-19 20:08, Bob Friesenhahn wrote:
On Sat, 19 Jan 2013, Jim Klimov wrote:
On 2013-01-19 18:17, Bob Friesenhahn wrote:
Resilver may in fact be just verifying that the pool disks are coherent
via metadata. This might happen if the fiber channel
On 2013-01-19 20:08, Bob Friesenhahn wrote:
On Sat, 19 Jan 2013, Jim Klimov wrote:
On 2013-01-19 18:17, Bob Friesenhahn wrote:
Resilver may in fact be just verifying that the pool disks are coherent
via metadata. This might happen if the fiber channel is flapping.
Correction: that
On 2013-01-19 18:17, Bob Friesenhahn wrote:
Resilver may in fact be just verifying that the pool disks are coherent
via metadata. This might happen if the fiber channel is flapping.
Correction: that (verification) would be scrubbing ;)
The way I get it, resilvering is related to scrubbing but
Hello all,
While revising my home NAS, which had dedup enabled before I gathered
that its RAM capacity was too puny for the task, I found that there is
some deduplication among the data bits I uploaded there (makes sense,
since it holds backups of many of the computers I've worked on - some
of m
On 2013-01-18 06:35, Thomas Nau wrote:
If almost all of the I/Os are 4K, maybe your ZVOLs should use a volblocksize of
4K? This seems like the most obvious improvement.
4k might be a little small. 8k will have less metadata overhead. In some cases
we've seen good performance on these workload
held by
those older snapshots. Moving such temporary work to a different
dataset with a different snapshot schedule and/or to a different
pool (to keep related fragmentation constrained) may prove useful.
HTH,
//Jim Klimov
On 2013-01-17 16:04, Bob Friesenhahn wrote:
If almost all of the I/Os are 4K, maybe your ZVOLs should use a
volblocksize of 4K? This seems like the most obvious improvement.
Matching the volume block size to what the clients are actually using
(due to their filesystem configuration) should im
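Note that volblocksize can only be set when a zvol is created, so
matching it to the clients means making new volumes, for example
(names and sizes hypothetical):
# zfs create -V 100G -o volblocksize=8k pool/vols/vm1
Existing zvols would, as far as I remember, need their data copied
(e.g. with dd) into such freshly created volumes, since zfs
send/receive preserves the original block size.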
On 2013-01-10 08:51, Jason wrote:
Hi,
The zfs pool on one of my servers faulted and it shows the following:
NAME        STATE     READ WRITE CKSUM
backup      UNAVAIL      0     0     0  insufficient replicas
  raidz2-0  UNAVAIL      0     0     0  insufficient replicas
    c4t0d0  O
ng over and
deleting the original) might or might not help free up a particular
TLVDEV (upon rewrite the blocks will be striped again, though maybe ZFS
will make different decisions upon the new write - and prefer the less
full devices).
Also, if the file's blocks are referenced via snapshots, clones,
d
a pool on the same SXCE. There were no
such problems with the newer build of parted in OI, so that disk was
in fact labeled for SXCE while the box was booted with the OI LiveCD.
HTH,
//Jim Klimov
On 2012-12-11 16:44, Jim Klimov wrote:
For single-break-per-row tests based on hypotheses from P parities,
D data disks and R broken rows, we need to checksum P*(D^R) userdata
recombinations in order to determine that we can't recover the block.
A small maths correction: the formula
On 2012-12-02 05:42, Jim Klimov wrote:
My plan is to dig out the needed sectors of the broken block from
each of the 6 disks and try any and all reasonable recombinations
of redundancy and data sectors to try and match the checksum - this
should be my definite answer on whether ZFS (of that
overheated CPU, non-ECC RAM and the software further along
the road). I am not sure which one of these *couldn't* issue
(or be interpreted to issue) a number of weird identical writes
to different disks at the same offsets.
Everyone is a suspect :(
Thanks,
//Jim Klimov
more below...
On 2012-12-06 03:06, Jim Klimov wrote:
It also happens that on disks 1,2,3 the first row's sectors (d0, d2, d3)
are botched - ranges from 0x9C0 to 0xFFF (end of 4KB sector) are zeroes.
The neighboring blocks, located a few sectors away from this one, also
have compressed dat
ssion, but only
apply to raidzN.
AFAIK, the contents of userdata sectors and their ordering don't even
matter to ZFS layers until decompression - parities and checksums just
apply to prepared bulk data...
//Jim Klimov
On 2012-12-06 02:08, Jim Klimov wrote:
On 2012-12-05 05:52, Jim Kli
or or two worth
of data.
So, given that there are no on-disk errors in the "Dataset mos
[META], ID 0" "Object #0" - what does the zpool scrub find time
after time and call an "error in metadata:0x0"?
Thanks,
//Jim Klimov
On 2012-12-06 09:35, Albert Shih wrote:
1) add a 5th top-level vdev (e.g. another set of 12 disks)
That's not a problem.
That IS a problem if you're going to ultimately remove an enclosure -
once added, you won't be able to remove the extra top-level VDEV from
your ZFS pool.
2) replace the
more below...
On 2012-12-05 23:16, Timothy Coalson wrote:
On Tue, Dec 4, 2012 at 10:52 PM, Jim Klimov wrote:
On 2012-12-03 18:23, Jim Klimov wrote:
On 2012-12-02 05:42, Jim Klimov wrote:
>> 4) Where are the redundancy algorithms specifi
On 2012-12-05 05:52, Jim Klimov wrote:
For undersized allocations, i.e. of compressed data, it is possible
to see P-sizes not divisible by 4 (disks) in 4KB sectors, however,
some sectors do apparently get wasted because the A-size in the DVA
is divisible by 6*4KB. With columnar allocation of
On 2012-12-05 23:11, Morris Hooten wrote:
Is there a documented way or suggestion on how to migrate data from VXFS
to ZFS?
Off the top of my head, I think this would go like any other migration -
create the new pool on new disks and use rsync for simplicity (if your
VxFS setup does not utilize
e
sure to run after zpool imports/before zpool exports), but brave souls
can feel free to try it out and comment. Presence of the service didn't
cause any noticeable troubles on my test boxen over the past couple of
weeks.
http://vboxsvc.svn.sourceforge.net/viewvc/vboxsvc/lib/svc/method/zfs
On 2012-11-29 10:56, Jim Klimov wrote:
For example, I might want to have corporate webshop-related
databases and appservers to be the fastest storage citizens,
then some corporate CRM and email, then various lower priority
zones and VMs, and at the bottom of the list - backups.
On a side note
On 2012-12-05 04:11, Richard Elling wrote:
On Nov 29, 2012, at 1:56 AM, Jim Klimov wrote:
I've heard a claim that ZFS relies too much on RAM caching, but
implements no sort of priorities (indeed, I've seen no knobs to
tune those) - so that if the stor
On 2012-12-03 18:23, Jim Klimov wrote:
On 2012-12-02 05:42, Jim Klimov wrote:
So... here are some applied questions:
Well, I am ready to answer a few of my own questions now :)
Continuing the desecration of my deceased files' resting grounds...
2) Do I understand correctly that fo
On 2012-12-03 20:51, Heiko L. wrote:
jimklimov wrote:
In general, I'd do the renaming with a "different bootable media",
including a LiveCD/LiveUSB, another distro that can import and
rename this pool version, etc. - as long as booting does not
involve use of the old rpool.
Thank you. I will t
On 2012-12-03 20:35, Heiko L. wrote:
I've already tested:
beadm create -p $dstpool $bename
beadm list
zpool set bootfs=$dstpool/ROOT/$bename $dstpool
beadm activate $bename
beadm list
init 6
- result:
root@opensolaris:~# init 6
updating //platform/i86pc/boot_archive
updating //platform/i86pc/amd
On 2012-12-02 05:42, Jim Klimov wrote:
So... here are some applied questions:
Well, I am ready to answer a few of my own questions now :)
I've staged an experiment by taking a 128Kb block from that file
and appending it to a new file in a test dataset, where I changed
the compression set
Hello all,
When I started with my old test box (the 6-disk raidz2 pool), I had
first created the pool on partitions (e.g. c7t1d0p0 or physical paths
like /pci@0,0/pci1043,81ec@1f,2/disk@1,0:q), but I soon destroyed
it and recreated it (with the same name "pool") on slices (e.g. c7t0d0s0
or /p
e the host's zfs volume which backs your old rpool
and use autoexpansion (or manual expansion) to let your VM's
rpool capture the whole increased virtual disk.
If the automagic doesn't work, I posted the manual procedure on this
list about a month ago:
http://mail.opensolaris.org/p
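In short, it boils down to something like this (all names hypothetical):
On the host:    # zfs set volsize=40G tank/vms/guest-rpool
In the guest:   # zpool set autoexpand=on rpool
                # zpool online -e rpool c1t0d0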
n yields the value saved in block pointer
and ZFS missed something, or if I don't get any such combo and ZFS
does what it should exhaustively and correctly, indeed ;)
Thanks a lot in advance for any info, ideas, insights,
and just for reading this long post to the end ;)
//Jim Klimov
e userdata and metadata (as is the default), while the randomly
accessed tablespaces might or might not be good candidates for such
caching - however, you can test this setting change on the fly.
I believe you must allow caching of userdata for a dataset in RAM
if you want to let it spill over
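That is, something like this on the tablespace dataset (the name is
hypothetical), changeable at any time:
# zfs get primarycache,secondarycache pool/db/tables
# zfs set primarycache=all pool/db/tables
# zfs set secondarycache=all pool/db/tables
(primarycache=metadata would keep the table userdata out of ARC - and,
per the above, out of L2ARC as well)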
On 2012-11-30 15:52, Tomas Forsman wrote:
On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
Hi all,
I would like to know if with ZFS it's possible to do something like this:
http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
Removing a disk - no, one still can not reduce
I've heard a claim that ZFS relies too much on RAM caching, but
implements no sort of priorities (indeed, I've seen no knobs to
tune those) - so that if the storage box receives many different
types of IO requests with different "administrative weights" in
the view of admins, it can not really thr
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
There are very few situations where (gzip) option is better than the
default lzjb.
Well, for the most part my question regarded the slowness (or lack
thereof) of gzip DEcompression as compared to lz* algorithms. If there
are files and data
at the zfs blocks for OS image files would be
the same and dedupable.
HTH,
//Jim Klimov
the current JDK
installed in GZ: either simply lofs-mounted from GZ to LZs,
or in a separate dataset, cloned and delegated into LZs (if
JDK customizations are further needed by some - but not all -
local zones, i.e. timezone updates, trusted CA certs, etc.).
HTH,
//Jim Klimov
it is also possible that the block would go away (if it
is not also referenced by snapshots/clones/dedup), and such drastic
measures won't be needed.
HTH,
//Jim Klimov
On 2012-11-22 17:31, Darren J Moffat wrote:
Is it possible to use the ZFS Storage appliances in a similar
way, and fire up a Solaris zone (or a few) directly on the box
for general-purpose software; or to shell-script administrative
tasks such as the backup archive management in the global zone
(
?
Is it possible to run VirtualBoxes in the ZFS-SA OS, dare I ask? ;)
Thanks,
//Jim Klimov
On 2012-11-21 21:55, Ian Collins wrote:
I can't help thinking these drives would be overkill for an ARC device.
All of the expensive controller hardware is geared to boosting random
write IOPS, which is somewhat wasted on a write-slowly, read-often device.
The enhancements would be good for a ZIL, b
e should double-check the found
discrepancies and those sectors it's going to use to recover a
block, at least if the kernel knows it is on non-ECC RAM (if it
does), but I don't know if it really does that. (A worthy RFE if not.)
HTH,
//Jim Klimov
On 2012-11-21 03:21, nathan wrote:
Overall, the pain of the doubling of bandwidth requirements seems like a
big downer for *my* configuration, as I have just the one SSD, but I'll
persist and see what I can get out of it.
I might also speculate that for each rewritten block of userdata in
the V
On 2012-11-19 22:38, Mark Shellenbaum wrote:
The parent pointer is a single 64 bit quantity that can't track all the
possible parents a hard linked file could have.
I believe it is the inode number of the parent, or something similar -
and an available inode number can get recycled and used by newer
Oh, and one more thing: rsync is only good if your filesystems don't
really rely on ZFS/NFSv4-style ACLs. If you need those, you are stuck
with Solaris tar or Solaris cpio to carry the files over, or you have
to script up replication of ACLs after rsync somehow.
You should also replicate the "loc
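If memory serves, the ACL-preserving copy with the native tools looks
roughly like this (source/target paths hypothetical):
# cd /tank/src && tar cpf - . | ( cd /tank/dst && tar xpf - )
(Solaris tar with "p", or Solaris cpio with "-P", is supposed to carry
ACLs along; GNU tar and plain rsync will silently drop them.)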
On 2012-11-19 20:58, Mark Shellenbaum wrote:
There is probably nothing wrong with the snapshots. This is a bug in
ZFS diff. The ZPL parent pointer is only guaranteed to be correct for
directory objects. What you probably have is a file that was hard
linked multiple times and the parent pointer
mes, if you do the plain rsync from each snapdir.
Perhaps, if "zfs diff" does perform reasonably for you, you can
feed its output to rsync as the list of objects to replicate,
and save many cycles this way.
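A rough sketch of that plumbing (paths hypothetical; deletions, renames
and paths containing spaces would need smarter parsing than this):
# zfs diff pool/fs@yesterday pool/fs@today | awk '{print $2}' | \
    sed 's|^/pool/fs/||' > /tmp/changed.files
# rsync -aH --files-from=/tmp/changed.files /pool/fs/ backuphost:/backup/fs/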
Good luck,
//Jim Klimov
On 2012-11-16 14:45, Jim Klimov wrote:
Well, as a simple stone-age solution (to simplify your SMF approach),
you can define custom attributes on datasets, zvols included. I think
a custom attr must include a colon ":" in the name, and values can be
multiline if needed. Simple examp
Well, as a simple stone-age solution (to simplify your SMF approach),
you can define custom attributes on datasets, zvols included. I think
a custom attr must include a colon ":" in the name, and values can be
multiline if needed. Simple example follows:
# zfs set owner:user=jim pool/rsvd
# zfs se
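Reading such a property back from a script is equally simple, e.g.:
# zfs get -H -o value owner:user pool/rsvd
jim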
On 2012-11-16 12:43, Robert Milkowski wrote:
No, there isn't another way to do it currently. The SMF approach is
probably the best option for the time being.
I think that there should be a couple of other properties for zvols
where permissions could be stated.
+1 :)
Well, when the subject was discussed
On 2012-11-15 21:43, Geoff Nordli wrote:
Instead of using VDI files, I use COMSTAR targets and then use the
VirtualBox built-in iSCSI initiator.
Out of curiosity: in this case are there any devices whose ownership
might get similarly botched, or have you tested that this approach also
works well for non-root VMs?
On 2012-11-14 18:05, Eric D. Mudama wrote:
On Wed, Nov 14 at 0:28, Jim Klimov wrote:
All in all, I can't come up with anything offensive against it quickly
;) One possible nit regards the ratings being geared towards 4KB blocks
(which is not unusual with SSDs), so it may be further
On 2012-11-14 03:20, Dan Swartzendruber wrote:
Well, I think I give up for now. I spent quite a few hours over the last
couple of days trying to get gnome desktop working on bare-metal OI,
followed by virtualbox. Supposedly that works in headless mode with RDP for
management, but nothing but fa
On 2012-11-13 22:56, Mauricio Tavares wrote:
Trying again:
Intel just released those drives. Any thoughts on how nicely they will
play in a zfs/hardware raid setup?
Seems interesting - fast, assumed reliable and consistent in its IOPS
(according to marketing talk), addresses power loss reliabi
20Gb in size (dunno why - sol10u10 bug?) :(
So I do the manual step:
# zpool online -e pool c1t1d0
The "-e" flag marks the component as eligible for expansion.
When all pieces of a top-level vdev become larger, the setting
takes effect and the pool finally beco
process partial
overwrites of a 4KB sector with 512b pieces of data - would other
bytes remain intact or not?..
Before trying to fool a production system this way, if at all,
I believe some stress-tests with small blocks are due on some
other system.
My 2c,
//Jim Klimov
On 2012-11-09 18:06, Gregg Wonderly wrote:
> Do you move the pools between machines, or just on the same physical
machine? Could you just use symlinks from the new root to the old root
so that the names work until you can reboot? It might be more practical
to always use symlinks if you do a l
nment or even into a livecd/failsafe,
just so that the needed datasets or paths won't be "busy" and so I
can set, verify and apply these mountpoint values. This is not a
convenient way to do things :)
Thanks,
//Jim Klimov
ion):
http://vboxsvc.svn.sourceforge.net/viewvc/vboxsvc/lib/svc/method/vbox.sh
http://vboxsvc.svn.sourceforge.net/viewvc/vboxsvc/var/svc/manifest/site/vbox-svc.xml
http://vboxsvc.svn.sourceforge.net/viewvc/vboxsvc/usr/share/doc/vboxsvc/README-vboxsvc.txt
See you in the VirtualBox forum t
On 2012-11-09 16:14, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: Karl Wagner [mailto:k...@mouse-hole.com]
If I were doing this now, I would probably use the ZFS-aware OS on bare metal,
but I still think I would use iSCSI to export the ZVols (mainly due to the
ability
to us
her distros and backups.
HTH,
//Jim Klimov