into which I can plug my
recordsize and volume size to get the appropriate numbers?
Best regards
roy
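For reference, a rough rule of thumb to plug those numbers into - a minimal
sketch, assuming roughly 270 bytes of DDT per unique block (the figure quoted
elsewhere in this archive) and mostly full-size blocks:

  # entries  = volume_size / recordsize
  # ddt_size = entries * ~270 bytes per entry
  # e.g. 1 TiB of 128 KiB blocks:
  echo $(( (1 << 40) / (1 << 17) * 270 ))   # 2264924160 bytes, i.e. ~2.1 GiB

Halve the recordsize and the estimate doubles.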
I doubt it. ZFS is meant to be used for large systems, in which memory is not
an issue
Best regards
roy
> As zdb is an intentionally unsupported tool, methinks recompile
> may be required (or write your own).
I guess this tool might not work too well, then, with 20TiB in 47M files?
Best regards
roy
can use for a general overview? Say I want 125TB space and I want to
dedup that for backup use. Dedup will probably be quite efficient, as long as
the alignment matches. By the way, is there a way to auto-align data for dedup
in the case of backups? Or does zfs do this by itself?
Best regards
roy
Hi all
From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide I read:
"Avoid creating a RAIDZ, RAIDZ-2, RAIDZ-3, or a mirrored configuration with one
logical device of 40+ devices. See the sections below for examples of redundant
configurations."
What do they mean by this?
- "Edward Ned Harvey" skrev:
> > What build were you running? That should have been addressed by
> > CR 6844090, which went into build 117.
>
> I'm running Solaris, but that's irrelevant. The StorageTek array
> controller itself reports the new disk as infinitesimally smaller
> than the one
if you're going to
> create
Seems like a clumsy workaround for a hardware problem. It will also disable the
drives' cache, which is not a good idea. Why not just get a new drive?
Best regards
roy
- "Kyle McDonald" skrev:
> I've seen the Nexenta and EON webpages, but I'm not looking to build
> my own.
>
> Is there anything out there I can just buy?
I've set up a few systems with Supermicro hardware - works well and doesn't
cost a whole lot.
Hi all
Is it possible to securely delete a file from a zfs dataset/zpool once it's
been snapshotted, meaning "delete (and perhaps overwrite) all copies of this
file"?
Best regards
roy
I guess that's the way I thought it was. Perhaps it would be nice to add such a
feature? If something gets stuck in a truckload of snapshots, say a 40GB file
in the root fs, it'd be nice to just rm --killemall largefile
Best regards
roy
r...@urd:~# zfs get casesensitivity dpool/test
NAME        PROPERTY         VALUE      SOURCE
dpool/test  casesensitivity  sensitive  -
This seems to be settable only at create time, not later. See man zfs for more info.
Best regards
roy
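A minimal illustration of that (second dataset name hypothetical) - the
property has to be supplied at creation:

  # casesensitivity cannot be changed after creation
  zfs create -o casesensitivity=insensitive dpool/test2
  zfs get casesensitivity dpool/test2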
From Wikipedia, PCI is:
133 MB/s (32-bit at 33 MHz)
266 MB/s (32-bit at 66 MHz or 64-bit at 33 MHz)
533 MB/s (64-bit at 66 MHz)
Not quite the 3GB/s hoped for. But how fast do drives themselves tend to be? I
rarely see above 80-100MB/s, although my drives are just consumer-level 7200RPM
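A quick sanity check of that arithmetic (drive count and per-drive throughput
are assumptions, runnable in any POSIX shell):

  bus=133                 # MB/s, 32-bit/33 MHz PCI, shared by all devices
  drives=8; per_drive=80  # conservative MB/s for consumer 7200RPM disks
  echo "demand: $((drives * per_drive)) MB/s vs bus: $bus MB/s"   # 640 vs 133

So a couple of ordinary drives already saturate plain PCI; the bus, not the
disks, becomes the bottleneck.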
- "Harry Putnam" skrev:
> Erik Trimble writes:
>
> >> Do you think it would be a problem having a second SATA card in a PCI
> >> slot? That would be 8 SATA ports in all, since the A-Open AK86
> >> motherboard has 2 built in. Or should I swap out the 2-port for the
> >> 4-port? I really
Hi all
I've been playing a little with dedup, and it seems it needs a truckload of
memory, something I don't have on my test systems. Does anyone have performance
data for large (20TB+) systems with dedup?
roy
ZFS first does a scan of indices and such, which requires lots of seeks. After
that, the resilvering starts. I guess if you give it an hour, it'll be done.
roy
- "Leandro Vanden Bosch" skrev:
Hello everyone,
As one of the steps of improving my ZFS home fileserver (snv_134) I wanted t
vs raidz3 - 1 disk.
A degraded raidz2 (minus one disk) will offer the same redundancy as raidz1
would, and the same numbers apply to raidz3 vs raidz2. One of the good
reasons for using raidz2 or even raidz3 is the chance of sector failure during
the eventual resilver.
Best regards
roy
- "Dave Pooser" skrev:
> I'm building another 24-bay rackmount storage server, and I'm
> considering
> what drives to put in the bays. My chassis is a Supermicro SC846A, so
> the
> backplane supports SAS or SATA; my controllers are LSI3081E, again
> supporting SAS or SATA.
>
> Looking at dri
- "Tonmaus" skrev:
> I wonder if this is the right place to ask, as the Filesystem in User
> Space implementation is a separate project. In Solaris ZFS runs in
> kernel. FUSE implementations are slow, no doubt. Same goes for other
> FUSE implementations, such as for NTFS.
The classic answers
- "Brandon High" skrev:
> SAS drives are generally intended to be used in a multi-drive / RAID
> environment, and are delivered with TLER / CCTL / ERC enabled to
> prevent them from falling out of arrays when they hit a read error.
>
> SAS drives will generally have a longer warranty than de
- "Neil Simpson" skrev:
> I'm pretty sure Solaris 10 update 9 will have zpool version 22 so WILL
> have dedup.
Interesting - where did you get this information?
roy
> This really depends on if you are willing to pay in advance, or pay
> after the failure. Even with redundancy, the cost of a failure may be
> high due to loss of array performance and system administration time.
>
> Array performance may go into the toilet during resilvers, depending
> on
- "Daniel Carosone" skrev:
> On Mon, Apr 26, 2010 at 10:02:42AM -0700, Chris Du wrote:
> > SAS: full duplex
> > SATA: half duplex
> >
> > SAS: dual port
> > SATA: single port (some enterprise SATA has dual port)
> >
> > SAS: 2 active channel - 2 concurrent write, or 2 read, or 1 write
> and
- "Tim.Kreis" skrev:
> The problem is that the Windows Server Backup seems to choose dynamic
> VHD (which would make sense in most cases) and I don't know if there is
> a way to change that. Using iSCSI volumes won't help in my case since
> the servers are running on physical hardware.
It s
scrub's priority somehow?
Best regards
roy
I got this hint from Richard Elling, but haven't had time to test it much.
Perhaps someone else could help?
roy
> Interesting. If you'd like to experiment, you can change the limit of the
> number of scrub I/Os queued to each vdev. The default is 10, but that
> is too close to the normal lim
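For the record, a sketch of how such a knob would be inspected and changed on
a live OpenSolaris kernel with mdb - assuming the tunable Richard refers to is
zfs_scrub_limit; verify the symbol exists before poking a production box:

  echo "zfs_scrub_limit/D" | mdb -k       # print the current value (decimal)
  echo "zfs_scrub_limit/W0t5" | mdb -kw   # write decimal 5 into it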
of data to somewhere to find it's twice as
big as reported by du.
Best regards
roy
> On Thu, 29 Apr 2010, Tonmaus wrote:
>
> > Recommending not to use scrub doesn't even qualify as a
> > workaround, in my regard.
>
> As a devoted believer in the power of scrub, I believe that after the
> OS, power supplies, and controller have been verified to function with
> a good scrub
- "Cindy Swearingen" skrev:
> Brandon,
>
> You're probably hitting this CR:
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924824
Interesting - reported in February and still no fix?
roy
- "Jan Riechers" skrev:
I am using a mirrored system pool on 2 80G drives - however, I was only using
40G since I thought I might use the rest for something else. ZFS Time Slider
was complaining that the pool was 90% full, and I decided to increase the pool
size.
What I did was a zpool detach
- "Richard L. Hamilton" skrev:
> One can rename a zpool on import
>
> zpool import -f pool_or_id newname
>
> Is there any way to rename it (back again, perhaps)
> on export?
>
> (I had to rename rpool in an old disk image to access
> some stuff in it, and I'd like to put it back the way it
44d0ba0
zfs:spa_sync+3a9 ()
May 2 17:42:09 mime genunix: [ID 655072 kern.notice] ff00044d0c40
zfs:txg_sync_thread+24a ()
May 2 17:42:09 mime genunix: [ID 655072 kern.notice] ff00044d0c50
unix:thread_start+8 ()
Best regards
roy
> I am currently using OpenSolaris 2009.06
> If I was to upgrade to the current "developer" version, forgive my
> ignorance
> (since I am new to *solaris), but how would I do this?
# pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
# pkg image-update
That'll take you to snv_134
(consumer) 2TB
drives, but that's your choice :)
Best regards
roy
- "Roy Sigurd Karlsbakk" skrev:
> Hi all
>
> I have a test system with snv134 and 8x2TB drives in RAIDz2 and
> currently no Zil or L2ARC. I noticed the I/O speed to NFS shares on
> the testpool drops to something hardly usable while scrubbing the
> pool.
>
> existing "gpool" so that I have
> a RAIDZ of 6x drives.
>
> Any guidance on how to do it? I tried to do zfs snapshot
You can't boot off raidz; that's for data only. Get a couple of cheap drives or
SSDs for the root and use the large drives for data.
Best regards
roy
Hi
Does this mean exporting and re-importing an rpool breaks things? I have tried
exporting and re-importing other pools with new names and haven't seen any
problems with it.
roy
- "Cindy Swearingen" skrev:
> Hi Richard,
>
> Renaming the root pool is not recommended. I have some details on
ways good to be sure ...
This is the case with most OSes now. Swap out stuff early, perhaps keep it in
RAM and swap at the same time, and the kernel can choose what to do later. In
Linux you can set this in /proc/sys/vm/swappiness.
Does anyone know how this is tuned in osol, btw?
Best regards
roy
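Since the Linux knob came up, a concrete illustration (run as root; values are
0-100, lower means less eager to swap):

  cat /proc/sys/vm/swappiness         # show the current value (default 60)
  echo 10 > /proc/sys/vm/swappiness   # immediate effect, not persistent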
- "Giovanni" skrev:
> Hi,
>
> Were you ever able to solve this problem on your AOC-SAT2-MV8 card? I
> am in need of purchasing it to add more drives to my server.
>
What problem was this? I have two servers with these cards and they work well.
Best regards
roy
tools exist that can do the
same?
Best regards
roy
- "Michael Schuster" skrev:
> On 10.05.10 08:57, Roy Sigurd Karlsbakk wrote:
> > Hi all
> >
> > It seems that if using zfs, the usual tools like vmstat, sar, top
> etc are quite worthless, since zfs i/o load is not reported as iowait
> etc. Are there any
all to be individual
> ZFS file systems, but there seems to be a bug/limitation due to the
> prohibitive creation time.
Is there a chance of you running out of memory here? If ZFS runs out of memory,
it'll read indices from disk instead of keeping them in memory, something that
can al
- "Roy Sigurd Karlsbakk" skrev:
> - "charles" skrev:
>
> > Hi,
> >
> > This thread refers to Solaris 10, but it was suggested that I post
> it
> > here as ZFS developers may well be more likely to respond.
> >
of memory (or l2arc) will be required per terabyte for the DDT,
which is quite a lot...
Best regards
roy
- "Jim Horng" skrev:
> zfs send tank/export/projects/project1...@today | zfs receive -d
> mpool
Perhaps zfs send -R is what you're looking for...
roy
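A minimal sketch of the -R form, modelled on the quoted command (dataset names
hypothetical) - it replicates the dataset with all descendants, snapshots and
properties:

  zfs send -R tank/export/projects@today | zfs receive -d mpool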
Product/Product.aspx?Item=N82E16813182230
I'd hate to buy it and find out it doesn't work.
AFAIK this is just yet another Opteron, only with a bunch more cores...
roy
te cache?
Compared to what you have now, I guess anything will do. It doesn't need to be
500 gigs, just use something >= that size, preferably larger, in case the 500
gigs turns out to be 499.
Best regards
roy
Hi all
I've been reading a little, and it seems using WD Green drives isn't very
popular around here. Can someone explain why these are so much worse than
others? Usually I see drives go bad at more or less the same frequency...
Best regards
roy
ays to scrub it. Does anyone have scrub times for
similar setups with, say, Black drives?
> (3) They seem to have a lot of platters.. 3 or 4. More platters ==
> more heat == more failure... apparently.
I somehow doubt Black drives have fewer platters.
Best regards
roy
ly archive data, no reads exceeding the ethernet
bandwidth
Best regards
roy
ms with this in later versions, but I don't have the links
handy - google for it :)
Best regards
roy
d inner, although space-wise, the curve is not linear,
since most of the data is stored in the outer parts (due to more sectors per
cylinder there).
Best regards
roy
0 0
  c11d0s1  ONLINE       0     0     0
spares
  c9t7d0   AVAIL
Best regards
roy
it seems fdisk
partitions aren't automatically recognized if a drive is moved to another port,
while slices are. I think you should change that to slices.
Best regards
roy
er today about the choice of
SAS/SATA controllers. Most will do in a home server environment, though.
The AOC-SAT2-MV8s are great controllers, but they run on PCI-X, which isn't
compatible with PCI Express.
Best regards
roy
for ZFS all involve serial consoles, which did not work, and took a
lot of time to debug).
Best regards
roy
hough I've from what I've got on this list)
on how much memory one should have for deduping an xTiB dataset.
Does anyone know what the status of dedup is now? In 134 it doesn't work very
well, but is it better in ON140 etc.?
Best regards
roy
- "Erik Trimble" skrev:
> Roy Sigurd Karlsbakk wrote:
> > Hi all
> >
> > I've been doing a lot of testing with dedup and concluded it's not
> really ready for production. If something fails, it can render the
> pool unusable for hours or maybe
. Some seem to be fixed in 135,
and it was said here on the list that all known bugs should be fixed before the
next release (see my thread 'dedup status')
roy
- "Roy Sigurd Karlsbakk" skrev:
> - "John Balestrini" skrev:
>
> > Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was
> > imagining that the large ratio was tied to that particular snapshot.
> >
> > basie@/root# zpool list
> If the dedup rate = 1.0, then this number will scale linearly with size.
> If the dedup rate > 1.0, then this number will not scale linearly; it
> will be less. So you can use the linear scale as a worst-case
> approximation.
How large was this filesystem?
Are there any good ways of planning memory or SS
ey on l2arc, since it _will_ require either massive amounts
of RAM or some good SSDs for l2arc.
Best regards
roy
er, osol won't use more than half the memory size for the ZIL, so you can
probably slice it up and use the rest for L2ARC.
Best regards
roy
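A sketch of that slicing idea (pool, device and slice names hypothetical):
give the slog a small slice and hand the rest of the SSD to L2ARC:

  zpool add tank log c4t0d0s0     # small slice (about half of RAM) as slog
  zpool add tank cache c4t0d0s1   # the remainder as L2ARC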
> It recommends that for every TB of storage you have, you want 1GB of
> RAM just for the metadata.
That's for dedup, 150 bytes per block, meaning approx 1GB per 1TB if all (or
most) are 128kB blocks, and way more memory (or L2ARC) if you have small files.
Best regards
roy
=on and detach the 7.5GB drive and try to attach another 7.5GB
drive, I don't think it'll work very well. Again, I'd use slices. I think,
though, osol disables the write cache when working with slices rather than
whole drives. I also think this can be forced on, but I don't know how.
Best regards
roy
want the data or zfs send/receive if you also want the
snapshots etc). This might take a while with bad sectors and disk timeouts, but
you'll get (most of?) your data moved over without much hassle.
Just my two cents.
Best regards
roy
on a UPS while the server is, so it's not an issue. :)
Disabling the ZIL is, according to ZFS best practice, NOT recommended. Get some
SSDs for the ZIL instead, preferably mirrored. You won't need a lot; the ZIL
never uses more than half the RAM size.
Best regards
roy
Best regards
roy
.9 4 37 0 0
> 0 0 c4t17d0
Seems to me c9d1 is having a hard time. How is your zpool laid out?
Best regards
roy
Hi all
Since Windows Server 2003 or so, Windows has had some versioning support usable
from the client side when checking the properties of a file. Is it somehow
possible to use this functionality with ZFS snapshots?
Best regards
roy
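For what it's worth, a sketch using the in-kernel CIFS server - under the
assumption (worth verifying) that snapshots of an SMB-shared dataset are what
Windows clients see in the "Previous Versions" tab (names hypothetical):

  zfs set sharesmb=on tank/home   # share the dataset over SMB
  zfs snapshot tank/home@monday   # snapshots become the candidate versions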
each a zfs filesystem, each shared with the proper
sharenfs permissions.
Did I miss a browse or traverse option somewhere?
Is mydir2 on a separate filesystem/dataset?
Best regards
roy
iB or even
bytes?
Best regards
roy
- "Brandon High" skrev:
> On Sun, May 30, 2010 at 11:46 AM, Roy Sigurd Karlsbakk
> wrote:
> > Is there a way to report zpool/zfs stats in a fixed scale, like KiB
> or even bytes?
>
> Some (but not all) commands use -p.
> -p
>
- "Andreas Grüninger" skrev:
> Use
>
> zfs get -Hp used pool1/nfs1
>
> to get a parsable output.
r...@mime:~$ zfs get -Hp testpool
bad property list: invalid property 'testpool'
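The error is just a syntax slip: zfs get expects a property name (or 'all')
before the dataset, so the intended invocation was presumably something like:

  zfs get -Hp used testpool   # property first, then the dataset
  zfs get -Hp all testpool    # or every property, parsable, exact byte counts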
Best regards
roy
or the index to stay in RAM. With 64k
blocks, double that, et cetera...
L2ARC will help a lot if memory is low.
Best regards
roy
f painful to digest.
ZFS guarantees consistency in a redundant setup, but it looks like your pool
only consists of one drive, meaning zero redundancy
> 3. The action says "Determine if the device needs to be replaced". How
> the heck do I do that?
Attach another drive, let it mirror (resilver), then detach the old one.
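A sketch of that procedure (pool and device names hypothetical):

  zpool attach tank c0t0d0 c0t1d0   # turn the lone drive into a mirror
  zpool status tank                 # wait for the resilver to finish
  zpool detach tank c0t0d0          # then drop the suspect drive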
h only 128k blocks
Best regards
roy
- "Brandon High" skrev:
> On Sun, Jun 6, 2010 at 3:26 AM, Roy Sigurd Karlsbakk
> wrote:
> > - "Brandon High" skrev:
> >> Decreasing the block size increases the size of the dedup table
> >> (DDT).
> >> Every entry in the DD
er is located some 50km
from home, so I need something that works, not part of the time, but all the
time. Due to this, I'd recommend against VirtualBox on OpenSolaris.
Best regards
roy
018.html)
> "Around 270 bytes, or one 512 byte sector."
I guess this then means you'll need to change the 1GiB per 1TiB deduplicated to
2GiB per 1TiB and way more with smaller blocks...
Best regards
roy
would need more RAM in this setup...
>
> Ray
With a 4k recordsize, you won't have enough memory slots for the RAM you'd
need. Grab a few X25-Ms or something to do the buffering.
Best regards
roy
what you are looking for.
Best regards
roy
ssion, dedup etc, although I don't
recommend dedup as of 134), restore the data.
Best regards
roy
top-posts and similar bottom-posts where everything in
the thread is kept. This is not good netiquette, even in 2010.
Best regards
roy
there some other way to
create a solid clone, particularly with a machine that won't have the same
drive configuration?
Read up on zfs send/receive :)
Best regards
roy
Hi all
The Crucial RealSSD C300 has been released and is showing good numbers for use
as ZIL and L2ARC. Does anyone know if this unit flushes its cache on request,
as opposed to the Intel units etc.?
Best regards
roy
bsolute maximum value is the size of
> main memory. Is this correct?
ZFS uses at most RAM/2 for ZIL
Best regards
roy
- Original Message -
> On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote:
>
> >> There is absolutely no sense in having slog devices larger than
> >> then main memory, because it will never be used, right?
> >> ZFS will rather flush the txg to disk than rea
ing is not
done if there are any slogs because I found it didn't perform as well. Probably
ought to be re-evaluated.
Won't this affect NFS/iSCSI performance pretty badly where the ZIL is crucial?
Best regards
roy
developers? Will it be addressed?
Best regards
roy
ase just check sizeof that struct?
Best regards
roy
Best regards
roy
ant to look into glusterfs if you want a redundant storage system
Best regards
roy
L2ARC
(shared with SLOG, we just use some slices for that).
Best regards
roy
- Original Message -
> On Jun 20, 2010, at 11:55, Roy Sigurd Karlsbakk wrote:
>
> > There will also be a few common areas for each department and
> > perhaps a backup area.
>
> The back up area should be on a different set of disks.
>
> IMHO, a back up i
ase tell :)
Best regards
roy
- Original Message -
> On Jun 21, 2010, at 05:00, Roy Sigurd Karlsbakk wrote:
>
> > So far the plan is to keep it in one pool for design and
> > administration simplicity. Why would you want to split up (net) 40TB
> > into more pools? Seems to me that'll me
- Original Message -
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
> >
> > Close to 1TB SSD cache will also help to boost read
> > speeds,
>
> Remember, this will not
doubt about the effect.
Does anyone know if this will help performance? Or will it hurt?
Best regards
roy
k in there.
Best regards
roy