What does one do for power? What are the power requirements when the
system is first powered on? Can drive spin-up be staggered between
JBOD chassis? Does the server need to be powered up last so that it
does not time out on the zfs import?
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
on OpenIndiana and it should be able to work
without VT extensions.
used to exhibit this problem so I opened Illumos issue 2998
(https://www.illumos.org/issues/2998). The weird thing is that the
problem went away and has not returned.
subset of the total files are updated (at least on
my systems) so the caching requirements are small. Files updated on
one day are more likely to be the ones updated on subsequent days.
many types of systems and filesystems.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
That is what I used to do before I learned better.
/bfriesen/zfs-discuss/zfs-cache-test.ksh.
The script will exercise an initial uncached read from disks, and then
a (hopefully) cached re-read from disks. I think that it serves as a
useful benchmark.
running on illumos-based
distributions.
Even FreeBSD's zfs is now based on zfs from Illumos. FreeBSD and
Linux zfs developers contribute fixes back to zfs in Illumos.
I am finding that rsync with the right options (to directly
block-overwrite) plus zfs snapshots is providing me with pretty
amazing deduplication for backups without even enabling
deduplication in zfs. Now backup storage goes a very long way.
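A minimal sketch of this scheme, with hypothetical dataset and path names: rsync's --inplace option overwrites changed blocks directly in the destination files, so a following zfs snapshot retains only the blocks that actually changed.

```shell
# --inplace makes rsync overwrite blocks in the existing destination file
# rather than writing a temp file and renaming it; ZFS COW then allocates
# new blocks only for data that actually changed.  --no-whole-file forces
# the rsync delta algorithm even for local copies.
rsync -a --inplace --no-whole-file /home/ /backup/home/

# Snapshot the backup dataset; unchanged blocks are shared with prior
# snapshots, giving dedup-like space savings without zfs dedup.
zfs snapshot backup/home@$(date +%Y-%m-%d)
```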
than
writing to a new temporary file first. As a result, zfs COW produces
primitive deduplication of at least the unchanged blocks (by writing
nothing) while writing new COW blocks for the changed blocks.
and then renaming them.
You have this reversed. The older data is served from fewer spindles
than data written after the new vdev is added. Performance with the
newer data should be improved.
their
discussion forums, which are web-based and virtually dead.
been some cases where people said unfavorable things about Oracle on
this list. Oracle needs to control its message, and the principal form
of communication will be via private support calls authorized by
service contracts and authorized corporate publications.
improvements.
/adm/messages which might indicate
when and how FC connectivity has been lost?
On Sat, 19 Jan 2013, Jim Klimov wrote:
On 2013-01-19 18:17, Bob Friesenhahn wrote:
Resilver may in fact be just verifying that the pool disks are coherent
via metadata. This might happen if the fiber channel is flapping.
Correction: that (verification) would be scrubbing ;)
I don't think
or is only one switch used?
to their filesystem configuration) should improve performance
during normal operations and should reduce the number of blocks which
need to be sent in the backup by reducing write amplification due to
overlap blocks.
If this is going continuously, then it may be causing
more fragmentation in conjunction with your snapshots.
See http://www.brendangregg.com/dtrace.html.
On Thu, 17 Jan 2013, Bob Friesenhahn wrote:
For NFS you should disable atime on the NFS client mounts.
This advice was wrong. It needs to be done on the server side.
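For ZFS on the server side, that would look like this (dataset name hypothetical):

```shell
# Disable atime updates on the exported dataset itself; client-side
# noatime mount options do not stop the server from updating atime
# when the file data is read over NFS.
zfs set atime=off tank/export/home

# Verify the setting.
zfs get atime tank/export/home
```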
further?
Do some filesystems contain many snapshots? Do some filesystems use
small zfs block sizes? Have the servers been used in the same way?
on
Illumos or OpenIndiana mailing lists and I don't recall seeing this
issue in the bug trackers.
Illumos is not so good at dealing with huge-memory systems, but
perhaps it is also more stable.
experiment by booting
from the live CD and seeing if your disks show up.
and without rebooting. It is possible that my recollection
is wrong though. If my recollection is correct, then it is not so
important to know what is good enough before starting to put your
database in service.
chassis and use 'zfs
send' to send a full snapshot of each filesystem to the new pool.
After the bulk of the data has been transferred, take new snapshots
and send the remainder. This expects that both pools can be available
at once.
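A sketch of that procedure, with hypothetical pool and filesystem names:

```shell
# Bulk copy while the filesystem is still in service.
zfs snapshot tank/data@migrate-base
zfs send tank/data@migrate-base | zfs recv newpool/data

# Later, after quiescing writers, send only what changed since the base
# snapshot; this incremental pass should be much smaller.
zfs snapshot tank/data@migrate-final
zfs send -i tank/data@migrate-base tank/data@migrate-final | zfs recv newpool/data
```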
is effective because problems just go away once
enough resources are available.
would
work if they support the standard AHCI interface. I would not take
any chance with unknown SAS.
unlikely.
I purchased an eSATA card (from SIIG, http://www.siig.com/) with the
intention to try it with Solaris 10 to see if it would work but have
not tried plugging it in yet.
It seems likely that a number of cheap eSATA cards may work.
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's 6Gbps SAS HBA cards. They can be had new for $100 and
are essentially rebranded LSI 9200-8e cards. Always try to look for OEM
cards with LSI
in any snapshot.
a
minimum of 4k. There might be more space consumed by the metadata
than the actual data.
be able to get 12k no problem. We are running
NFS in a heavily used environment with millions of very small files,
so low latency counts.
Your test method is not valid.
Quite a lot of product would need to be sold in order to pay for both
re-engineering and the cost of running a business.
Regardless, continual product re-development is necessary or else it
will surely die.
DDRDrive product still supported and moving? Is it well
supported for Illumos?
backup can be a whole lot faster
and still satisfy many users.
more data if there is a power
failure.
to not come up immediately, or be slow to come up when
recovering from a power failure.
the
copies feature should be pretty effective.
Would the use of several copies cripple the write speeds?
It would reduce the write rate by 1/2, or by a factor of whatever
number of copies you have requested.
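For example (dataset name hypothetical); note that the copies property only affects data written after it is set:

```shell
# Each block is written twice to the pool, so effective write
# bandwidth is roughly halved in exchange for extra redundancy.
zfs set copies=2 tank/important

# Verify the setting.
zfs get copies tank/important
```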
should
not improve the pool storage layout because the pool already had a
ZIL.
Verify that the zfs checksum algorithm you are using is a low-cost one
and that you have not enabled compression or deduplication.
You did not tell us how your zfs pool is organized so it is impossible
to comment more.
will work with drives with 4k
sectors so Solaris 10 users will not be stuck.
it. The closest
equivalent in a POSIX filesystem would be if a previously-null block
in a sparse file is updated to hold content.
a multiple of 64k.
developed for the product. It would no longer be an
appliance.
No doubt, Nexenta has developed new cool stuff for NexentaStor.
As others have said, only Oracle is capable of supporting the system
as the original product. It could be re-installed to become something
else.
, etc., are used.
out
262144 bytes (2.6 GB) copied, 0.379147 s, 6.9 GB/s
@0,0/pci8086,340a@3/pci1000,3140@0 (mpt2):
Jul 2 13:06:40 storage Disconnected command timeout for Target 5
Any ideas? Could you help me?
--
Roberto Scudeller
sata disks?
Unfortunately, I already put my pool into use and can not conveniently
destroy it now.
The disks I am using are SAS (7200 RPM, 1 TB) but return similar
per-disk data rates as the SATA disks I use for the boot pool.
408114 403473 761683 766615
268435456 64 418910 55239 768042 768498
268435456 128 408990 399732 763279 766882
268435456 256 413919 399386 760800 764468
268435456 512 410246 403019 766627 768739
disks are on expanders. There have also been reports that
one failing disk can cause problems when on expanders. Regardless, if
this system has been previously operating fine for some time, these
errors would indicate a change in the hardware shared by all these
devices.
for reads from mirrors to be faster than for a single
disk because reads can be scheduled from either disk, with different
I/Os being handled in parallel.
has requested it.
. If not that,
then there must be a bottleneck in your hardware somewhere.
(containing random
data such as returned from /dev/urandom) to a zfs filesystem, unmount
the filesystem, remount the filesystem, and then time how long it
takes to read the file once. This works because remounting the
filesystem restarts the filesystem cache.
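As a sketch, with hypothetical dataset and file names:

```shell
# Write 1 GiB of incompressible data so compression cannot distort
# the result (Solaris dd accepts the 'k' block-size suffix).
dd if=/dev/urandom of=/tank/test/random.dat bs=128k count=8192

# Unmount and remount to restart the filesystem cache before timing.
zfs umount tank/test
zfs mount tank/test

# Time an uncached read from disk.
time cat /tank/test/random.dat > /dev/null
```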
acceleration for some type of standard
encryption, then that needs to be considered. The CPU might not need
to do the encryption the hard way.
the magic
data block before some other known block (which produces the same
hash) is written. This allows one block to substitute for another.
It does seem that security is important because with a human element,
data is not necessarily random.
On Wed, 11 Jul 2012, Joerg Schilling wrote:
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
On Tue, 10 Jul 2012, Edward Ned Harvey wrote:
CPU's are not getting much faster. But IO is definitely getting faster. It's
best to keep ahead of that curve.
It seems that per-socket CPU
and compute a collision block.
For example, the well-known block might be part of a Windows
anti-virus package, or a Windows firewall configuration, and
corrupting it might leave a Windows VM open to malware attack.
a future,
particularly once it frees itself from all Sun-derived binary
components.
Oracle continues with Solaris 11 and does seem to be funding necessary
driver and platform support. User access to Solaris 11 may be
arbitrarily limited.
algorithm needs to assure that. Having an excellent random
distribution property is not sufficient if it is relatively easy to
compute some other block producing the same hash. It may be useful to
compromise a known block even if the compromised result is complete
garbage.
Oracle rescinded (or lost) the special Studio releases
needed to build the OpenSolaris kernel. The only way I can see to
obtain these releases is illegally.
However, Studio 12.3 (free download) produces user-space executables
which run fine under Illumos.
designed to perform copyright violations.
page).
On Tue, 3 Jul 2012, James Litchfield wrote:
Agreed - msync/munmap is the only guarantee.
I don't see that the munmap definition assures that anything is
written to disk. The system is free to buffer the data in RAM as
long as it likes without writing anything at all.
on the mapping with the MS_SYNC option.
will
keep looking since sooner or later they will provide it.
I browsed the site and saw many 6 Gbit enclosures. I also saw one with
Nexenta (a Solaris/zfs appliance) inside.
an end of the road to me.
, even if I had to drop to 3TB density.
Why would you want native 4k drives right now? Not much would work
with such drives.
Maybe in a dedicated chassis (e.g. the JBOD) they could be of some
use.
given there
is no shortage of physical ram ?
Absent memory pressure, pages which are no longer referenced will stay
in memory forever. They can then be re-referenced from memory.
for the specific disk would likely hasten progress.
and the applications.
layout would result in better
performance.
It seems safest to upgrade the OS before moving a lot of data. Leave
a fallback path in case the OS upgrade does not work as expected.
ones).
I like this idea since it allows running two complete pools on the
same disks without using files. Due to using partitions, the disk
write cache will be disabled unless you specifically enable it.
. There is not initially additional risk
due to raidz1 in the pool since the drives will be about as full as
before.
I am not sure what additional risks are involved due to using files.
if a snapshot was taken. What sort of zfs is being used here?
to test raidz
(http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf).
Most common benchmarking is sequential read/write and rarely
read-file/write-file where 'file' is a megabyte or two and the file is
different for each iteration.
that deduplication was not enabled for this pool? This is
the sort of behavior that one might expect if deduplication was
enabled without enough RAM or L2 read cache.
advantages obtained from simple
mirroring (duplex mirroring) with zfs.
On Fri, 4 May 2012, Rocky Shek wrote:
If I were you, I will not use 9240-8I.
I will use 9211-8I as pure HBA with IT FW for ZFS.
Is there IT FW for the 9240-8i?
They seem to use the same SAS chipset.
My next system will have 9211-8i with IT FW. Playing it safe. Good
enough for Nexenta
like it is short on memory only tests how the system will
behave when it is short on memory.
Testing multi-threaded synchronous writes with IOzone might actually
mean something if it is representative of your workload.
/projects/filebench/.
Zfs is all about caching so the cache really does need to be included
(and not intentionally broken) in any realistic measurement of how the
system will behave.
to be failing at once.
://www.youtube.com/user/deirdres and elsewhere on YouTube.
by
dynamic load-balancing. That is what I did for my storage here, but
the preferences needed to be configured on the remote end.
It is likely possible to configure everything on the host end but
Solaris has special support for my drive array so it used the drive
array's preferences.
to posting. :-(
before there was anything like
SEEK_HOLE.
If the file space usage is less than the file size shown in the
directory listing, then it must
contain a hole. Even for compressed files, I am pretty sure that
Solaris reports the uncompressed space usage.
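That comparison can be made with standard tools (path hypothetical): the logical size from ls -l exceeds the allocated size from du when the file is sparse.

```shell
# Create a ~10 MiB file that is almost entirely a hole: write a single
# byte at offset 10485759, leaving everything before it unallocated.
dd if=/dev/zero of=/tmp/sparse.dat bs=1 count=1 seek=10485759 2>/dev/null

ls -l /tmp/sparse.dat   # logical size: 10485760 bytes
du -k /tmp/sparse.dat   # allocated size: typically only a few KB
```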
be a chicken-and-egg problem
since Oracle might not want to answer speculative questions but might
be more concrete if you have a system in hand.
purchasable except for on the used market. Obtaining an approved
system seems very difficult. In spite of this, Solaris runs very well
on many non-approved modern systems.
I don't know what that means as far as the ability to purchase Solaris
support.
of front-end servers and put the larger
files on fewer storage servers because they are requested much less
often and stream out better. This would mean that those front-end
thumbnail servers would primarily contain small files.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http
not even be an option
for you.
further?
It is difficult to say if you should be worried.
Be sure to do 'iostat -xe' to see if there are any accumulating errors
related to the disk.
be tested with zfs commands without
physically moving/removing drives or endangering your data.
down may in fact be a more significant factor than a JBOD being
down.
specified to handle 105.
My own equipment typically experiences up to 83 degrees during the
peak of summer (but quite a lot more if the AC fails).