On Jun 11, 2006, at 03:21, can you guess? wrote:
My dim recollection is that TOPS-10 implemented its popular (but again 100%) undelete mechanism using the same kind of 'space-available' approach suggested here. It did, however, support explicit 'delete - I really mean it' facilities to
On Jul 9, 2006, at 01:05, Richard Elling wrote:
Also this week, I noticed that the HCL is falling behind. There are many systems which will work that aren't listed. The problem is that a motherboard design has about a 6 month market window. Rather than worry about specific boards, look
On Jul 15, 2006, at 04:31, Jeff Bonwick wrote:
Grab a cup of coffee and get comfortable. Ready?
Oh, what a can of worms this was.
Between weblogs and this mailing list, there seems to be a lot of talk about the whys on the design of ZFS. Being around for the public 'birth' of a new file
Hello,
Does the work of IEEE's Security in Storage Working Group [1] have any effect on the design of ZFS's encryption modules? Or do the two efforts deal with different layers?
It seems that 1619 is more geared towards SAN disks, which 'regular' file systems tend to sit on top of and not know
On Sep 13, 2006, at 10:52, Scott Howard wrote:
It's not at all bizarre once you understand how ZFS works. I'd suggest reading through some of the documentation available at http://www.opensolaris.org/os/community/zfs/docs/, in particular the slides available there.
The presentation that
On Jan 2, 2007, at 11:14, Richard Elling wrote:
Don't dispense with proper backups or you will be unhappy. One
of my New Years resolutions is to campaign against unhappiness.
So I would encourage you to explore ways to backup such large
data stores in a timely and economical way.
The Sun
On Jan 3, 2007, at 19:55, Jason J. W. Williams wrote:
performance should be good? I assumed it was an analog to RAID-6. In our recent experience, RAID-5 (two reads, an XOR calculation, and a write op per write instruction) is usually much slower than RAID-10 (two write ops). Any advice is
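As a rough sketch of the write-path difference being described (my summary, not the poster's): a small random write on RAID-5 typically costs read(old data) + read(old parity) + write(new data) + write(new parity) = 4 I/Os, with new parity = old parity XOR old data XOR new data, whereas RAID-10 just issues the two mirrored writes = 2 I/Os.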
On Jan 24, 2007, at 04:06, Bryan Cantrill wrote:
On Wed, Jan 24, 2007 at 12:15:21AM -0700, Jason J. W. Williams wrote:
Wow. That's an incredibly cool story. Thank you for sharing it! Does
the Thumper today pretty much resemble what you saw then?
Yes, amazingly so: 4-way, 48 spindles, 4u.
On Jan 26, 2007, at 14:43, Gary Mills wrote:
Our Netapp does double-parity RAID. In fact, the filesystem design is
remarkably similar to that of ZFS. Wouldn't that also detect the
error? I suppose it depends if the `wrong sector without notice'
error is repeated each time. Or is it random?
On Jan 29, 2007, at 20:27, Toby Thain wrote:
On 29-Jan-07, at 11:02 PM, Jason J. W. Williams wrote:
I seem to remember the Massive Array of Independent Disk guys ran into a problem I think they called static friction, where idle drives would fail on spin up after being idle for a long
On Jan 30, 2007, at 09:52, Luke Scharf wrote:
Hey, I can take a double-drive failure now! And I don't even need
to rebuild! Just like having a hot spare with raid5, but without
the rebuild time!
Theoretically you want to rebuild as soon as possible, because
running in degraded mode
On Mar 25, 2007, at 06:14, Thomas Nau wrote:
We use a cluster ;) but in the backend it doesn't solve the sync
problem as you mention
The StorageTek Availability Suite was recently open-sourced:
http://www.opensolaris.org/os/project/avs/
On Jun 9, 2007, at 06:14, Richard L. Hamilton wrote:
I wish there was a uniform way whereby applications could
register their ability to achieve or release consistency on demand,
and if registered, could also communicate back that they had
either achieved consistency on-disk, or were unable to
- Online filesystem check
* Very fast offline filesystem check
- Efficient incremental backup and FS mirroring
http://lkml.org/lkml/2007/6/12/242
http://oss.oracle.com/~mason/btrfs/
Via Storage Mojo:
http://storagemojo.com/?p=478
--
David Magda dmagda
On Jun 29, 2007, at 20:51, Stephen Le wrote:
I'm investigating the feasibility of migrating from UFS to ZFS for
a mail-store supporting 20K users. I need separate quotas for all
of my users, which forces me to create separate ZFS file systems
for each user.
Does each and every user have
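A rough sketch of what that per-user layout looks like, with made-up pool, dataset and quota values:
# zfs create -o mountpoint=/var/mail tank/mail
# zfs create tank/mail/alice
# zfs set quota=250M tank/mail/alice
# zfs list -r tank/mail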
On Jun 29, 2007, at 23:34, Rob Logan wrote:
eeprom kernelbase=0x8000
or for only 1G userland:
eeprom kernelbase=0x5000
How does eeprom(1M) work on the Xeon that the OP said he has?
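For what it's worth, on x86 eeprom(1M) has no real NVRAM to poke; as far as I remember it reads and writes the properties stored in /boot/solaris/bootenv.rc, e.g.:
# eeprom kernelbase
# grep kernelbase /boot/solaris/bootenv.rc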
On Jun 30, 2007, at 17:08, Richard Elling wrote:
Excellent question. The problem with using file system quotas for a service such as a mail store is that you have very little control over implementation of policies. The only thing the mail service knows is that a write to a mailbox fails.
FWIW I
On Jul 3, 2007, at 11:26, Albert Chin wrote:
PSARC 2007/171 will be available in b68. Any documentation anywhere on
how to take advantage of it?
For those not in the know, PSARC 2007/171 is a separate intent log
for ZFS:
http://cz.opensolaris.org/os/community/arc/caselog/2007/171/
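Once you're on b68, trying it out should be as simple as adding a log vdev to an existing pool (device names made up):
# zpool add tank log c3t0d0
or, mirrored:
# zpool add tank log mirror c3t0d0 c4t0d0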
Hello,
Not sure if anyone at Sun can comment on this, but I thought it might
be of interest to the list:
This morning, NetApp filed an IP (intellectual property) lawsuit
against Sun. It has two parts. The first is a “declaratory
judgment”, asking the court to decide whether we infringe
On Sep 6, 2007, at 10:41, [EMAIL PROTECTED] wrote:
Quite; it seems to all be done with blogs.
After Netapp's blog, we now see Sun's CEO enter into the fray:
http://blogs.sun.com/jonathan/entry/on_patent_trolling
And now NetApp's response:
On Sep 7, 2007, at 18:25, Stephen Usher wrote:
(I still have many-many machines on Solaris 8.) I can see it being at least a decade until all the machines we have are at a level to handle NFSv4.
If you need to have a Solaris 8 environment, but want to minimize the number of machines
On Sep 10, 2007, at 13:40, [EMAIL PROTECTED] wrote:
I am not against refactoring solutions, but zfs quotas and the lack of user quotas in general either leave people trying to use zfs quotas in lieu of user quotas, suggesting weak end runs against the problem (a cron to calculate
On Nov 4, 2007, at 00:42, MC wrote:
ZFS has an SMB server on the way, but there has been no real public information about it released. Here is a sample of its existence:
http://www.opensolaris.org/os/community/arc/caselog/2007/560/
There's
On Dec 18, 2007, at 12:23, Mike Gerdts wrote:
2) Database files - I'll lump redo logs, etc. in with this. In Oracle
RAC these must live on a shared-rw (e.g. clustered VxFS, NFS) file
system. ZFS does not do this.
If you can use NFS, can't you put things on ZFS and then export?
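Something along these lines, assuming the RAC nodes are called racnode1/racnode2 (names made up):
# zfs create tank/oradata
# zfs set sharenfs='rw=racnode1:racnode2,root=racnode1:racnode2' tank/oradata
# zfs get sharenfs tank/oradata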
On Jan 10, 2008, at 07:50, Łukasz K wrote:
As far as I know AVS doesn't support ZFS - there is a problem with mounting the backup pool.
Two demo movies with AVS and ZFS together were posted a little while
ago:
http://blogs.sun.com/AVS/entry/avs_zfs_the_sndr_replication
On Jan 14, 2008, at 17:15, mike wrote:
On 1/14/08, eric kustarz [EMAIL PROTECTED] wrote:
On Jan 14, 2008, at 11:08 AM, Tim Cook wrote:
www.mozy.com appears to have unlimited backups for 4.95 a month.
Hard to beat that. And they're owned by EMC now so you know they
aren't going anywhere
On Jan 22, 2008, at 18:24, Lori Alt wrote:
ZFS boot supported by the installation software, plus
support for having swap and dump be zvols within
the root pool (i.e., no longer requiring a separate
swap/dump slice), plus various other features, such
as support for failsafe-archive booting.
Hello,
Thought I'd mention a recent (slightly biased) article comparing
DragonflyBSD's new HAMMER file system and ZFS:
Infinite [automatic] snapshots
As-of mounts [like PITR on Postgres]
Clustered
Backups made easy
File database
Huge
On Feb 2, 2008, at 03:16, Jayakrishna wrote:
Is it possible to create a ZFS file system and boot the Solaris OS from it? Is this supported? If not, when will it be supported? Is booting from a ZFS partition supported on any other platform, e.g. x86?
This was just discussed
On Feb 16, 2008, at 06:43, Ross wrote:
It may not be relevant, but I've seen ZFS add weird delays to
things too. I deleted a file to free up space, but when I checked
no more space was reported. A second or two later the space appeared.
This also happens on FreeBSD's UFS if you have
On Feb 23, 2008, at 10:57, Uwe Dippel wrote:
Come on! Nobody?!
I read through documents for several hours, and have obviously done my work.
Can someone please point me to a link, or just unambiguously say 'yes' or 'no' to my question, whether ZFS could produce a snapshot of whatever type,
On Feb 24, 2008, at 01:49, Jonathan Loran wrote:
In some circles, CDP is big business. It would be a great ZFS
offering.
ZFS doesn't have it built-in, but AVS may be an option in some cases:
http://opensolaris.org/os/project/avs/
On Mar 13, 2008, at 07:33, Darren J Moffat wrote:
Paul B. Henson wrote:
I'm currently prototyping a Solaris file server that will dish out user home directories and group project directories via NFSv4 and Samba.
Why not the in-kernel CIFS server?
It's an option, but not everyone wants
On Mar 26, 2008, at 18:45, Vincent Fox wrote:
*sigh*
Last I heard it was going to be build 86. I saw build 85 come out
and thought GREAT only a couple more weeks!
Oh well..
After a little while no one remembers if a product was late or on
time, but everyone remembers if it was
On Apr 12, 2008, at 07:52, Graeme West wrote:
Just a quick question. Is it possible to utilise an Apple Xserve
RAID as an array for use with ZFS with RAID-Z in Solaris?
Yes. In one of our development labs we have the Xraid (now
discontinued) exporting things as a JBOD and using ZFS to tie
On Apr 12, 2008, at 10:23, Graeme West wrote:
Do you happen to have a note of the settings from the Xserve RAID
admin utility in order to present things as a JBOD? It may be
straightforward, I'm not too familiar with the utility.
Not off-hand. It's a separate dev group that tends to do
On Apr 15, 2008, at 13:18, Bob Friesenhahn wrote:
ZFS raidz1 and raidz2 are NOT directly equivalent to RAID5 and RAID6
so the failure statistics would be different. Regardless, single disk
failure in a raidz1 substantially increases the risk that something
won't be recoverable if there is a
On May 31, 2008, at 06:03, Joerg Schilling wrote:
The other method works as root if you use -atime (see man page) and has been available for 13 years.
Would it be possible to assign an RBAC role to a regular user to
accomplish this? If so, would you know which one?
Historically backups ran as
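If memory serves, there are stock 'Media Backup' and 'Media Restore' rights profiles that could be handed to an ordinary account (treat the profile names as an assumption and check prof_attr(4) locally):
# usermod -P 'Media Backup' backupuser
after which that user runs the backup command under pfexec(1).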
On Jun 28, 2008, at 10:17, Richard Elling wrote:
This week, Verident announced a system using Spansion EcoRAMs
(DRAM + NOR Flash on a DIMM form factor) for main memory.
This is almost getting there, but seems to require some special OS
support, which is not surprising. The holy grail is
On Jun 30, 2008, at 19:19, Jeff Bonwick wrote:
Dump is mandatory in the sense that losing crash dumps is criminal.
Swap is more complex. It's certainly not mandatory. Not so long ago,
swap was typically larger than physical memory.
These two statements kind of imply that dump and swap are
On Jul 10, 2008, at 12:42, Tim wrote:
It's the same reason you don't see HDS or EMC rushing to adjust the price of the SYM or USP-V based on Sun releasing the Thumpers.
No one ever got fired for buying EMC/HDS/NTAP
I know my company has corporate standards for various aspects of IT,
On Jul 14, 2008, at 20:49, Bob Friesenhahn wrote:
Any time you see even a single statement which is incorrect, it is
best to ignore that forum poster entirely and if no one corrects
him, then ignore the entire forum.
Yes, because each and every one of us must correct inaccuracies on the
On Aug 24, 2008, at 12:16, Robert wrote:
Since it is said that ZFS can offer unlimited snapshots, I am just wondering: if there are too many snapshots which consume too much space, how can ZFS deal with this?
You tell ZFS to delete the unneeded ones, or you hit 100% disk usage and
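e.g. to see which snapshots are eating the space and prune them (dataset names made up):
# zfs list -t snapshot -o name,used -s used
# zfs destroy tank/home@2008-06-01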
On Sep 30, 2008, at 06:58, Ahmed Kamal wrote:
- I still don't have my original question answered, I want to somehow assess the reliability of that zfs storage stack. If there's no hard data on that, then if any storage expert who works with lots of systems can give his impression of
On Sep 30, 2008, at 19:44, Miles Nordin wrote:
There are checksums in the ethernet FCS, checksums in IP headers,
checksums in UDP headers (which are sometimes ignored), and checksums
in TCP (which are not ignored). There might be an RPC layer checksum,
too, not sure.
None of which helped
On Sep 30, 2008, at 19:09, Tim wrote:
SAS has far greater performance, and if your workload is extremely random, will have a longer MTBF. SATA drives suffer badly on random workloads.
Well, if you can probably afford more SATA drives for the purchase price, you can put them in a
On Oct 10, 2008, at 15:48, Victor Latushkin wrote:
I've mostly seen (2), because despite all the best practices out there, single vdev pools are quite common. In all such cases that I had my hands on it was possible to recover the pool by going back by one or two txgs.
For better or worse
On Oct 16, 2008, at 15:20, Marion Hakanson wrote:
For the stated usage of the original poster, I think I would aim toward turning each of the Thumpers into an NFS server, configure the head-node as a pNFS/NFSv4.1
It's a shame that Lustre isn't available on Solaris yet either.
Lori Alt gave an informative presentation (40 min.) on how ZFS booting
works in Solaris 10 Update 6 (10/08):
http://blogs.sun.com/storage/entry/zfs_boot_in_solaris_10
The audio seems to be mono and focused on the left channel (or I'm
having an aneurism of some kind).
Two questions that
On Oct 31, 2008, at 13:13, Richard Elling wrote:
Paul Kraus wrote:
Is there a ufsdump equivalent for ZFS ? For home use I really don't
want to have to buy a NetBackup license.
No, and it is unlikely to happen. To some degree, ufsdump existed
because of deficiencies in other copy
Answering myself because I've gotten things to work, but it's a mystery as
to why they're working (I have a Sun case number if anyone at Sun.com is
interested).
Steps:
1. Try to create a pool on a pseudo-device:
# zpool create mypool emcpower0a
This receives an I/O error (see
On Nov 10, 2008, at 13:55, Tim wrote:
Just got an email about this today. Fishworks finally unveiled?
http://www.sun.com/launch/2008-1110/index.jsp
Looks like it:
http://blogs.sun.com/fishworks/entry/launch_blogs
http://blogs.sun.com/main/tags/sunstorage7000
On Nov 10, 2008, at 14:05, Eric Schrock wrote:
If you want to give it a spin, be sure to check out the freely
available VM images.
Took a bit of digging, but the VMware image is at:
http://www.sun.com/storage/disk_systems/unified_storage/resources.jsp
On Thu, November 20, 2008 14:43, Krenz von Leiberman wrote:
I have a computer that contains 8 1TB drives. I have 3 USB drives that are
750MB. So that's a total of 11 drives. Can I install solaris or
opensolaris on the first harddrive and mirror it on the second harddrive?
Can I then use the 9
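For the mirroring part, assuming a ZFS root install, attaching the second disk afterwards should do it (device names made up; on x86 the second disk also needs boot blocks via installgrub):
# zpool attach rpool c1t0d0s0 c1t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0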
On Nov 24, 2008, at 17:32, Tim wrote:
On Mon, Nov 24, 2008 at 2:22 PM, Ahmed Kamal [EMAIL PROTECTED] wrote:
Not sure if this is the best place to ask, but do Sun's new Amber Road storage boxes have any kind of integration with ESX? Most importantly, quiescing the VMs, before
On Jan 6, 2009, at 14:21, Rob wrote:
Obviously ZFS is ideal for large databases served out via application level or web servers. But what other practical ways are there to integrate the use of ZFS into existing setups to experience its benefits.
Remember that ZFS is made up of the ZPL
There has been a partial port of ZFS to NetBSD. Announcement at:
http://mail-index.netbsd.org/tech-kern/2009/01/11/msg003992.html
It does not do anything useful yet, but it is a start.
On Jan 28, 2009, at 16:39, Miles Nordin wrote:
Oxford 911 seems to describe a brand of chips, not a specific chip, but it's been a good brand, and it's a very old brand for FireWire. As an added bonus this chipset allows multiple logins so it can be used to experiment with things like Oracle
On Jan 29, 2009, at 18:02, Peter Reiher wrote:
Does ZFS currently support actual use of extended attributes? If
so, where can I find some documentation that describes how to use
them?
Your best bet would probably be:
http://search.sun.com/docs/index.jsp?qt=zfs+extended+attributes
Is
On Feb 8, 2009, at 16:12, Vincent Fox wrote:
Do you think having log on a 15K RPM drive with the main pool
composed of 10K RPM drives will show worthwhile improvements? Or am
I chasing a few percentage points?
Another important question is whether it would be sufficient to
purchase
On Tue, February 17, 2009 01:50, Marion Hakanson wrote:
Note that the only available pool failure mode in the presence of a SAN
I/O error for these OS's has been to panic/reboot, but so far when the
systems have come back, data has been fine. We also do tape backups
of these pools, of
On Feb 17, 2009, at 17:56, Joe S wrote:
Does that sound like a viable backup solution?
It has been explicitly stated numerous times that the output of 'zfs
send' has no guarantees and it is undocumented. From zfs(1M):
The format of the [zfs send] stream is evolving. No backwards
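In other words, pipe the stream straight into a receiving pool instead of archiving the raw stream somewhere (host and dataset names made up):
# zfs snapshot tank/home@20090217
# zfs send tank/home@20090217 | ssh backuphost zfs recv backuppool/home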
On Feb 17, 2009, at 21:35, Scott Lawson wrote:
Everything we have has dual power supplies, fed from dual power rails, fed from separate switchboards, through separate very large UPSs, backed by generators, fed by two substations and then cloned to another data center 3 km away. HA
On Feb 26, 2009, at 14:20, Tomas Ögren wrote:
Rsync supports POSIX Draft ACLs (UFS/NFSv3), but not ZFS ACLs..
However,
you can do a sneaky thing.. Mount your ZFS filesystem over NFSv4 from
yourself and rsync -A from /ufsthingie/ to /nfs4mountedzfs/ .. that
will copy all ACLs..
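Roughly like this, if I understand the trick correctly (paths and mount options made up, so verify locally):
# mount -F nfs -o vers=4 localhost:/export/home /mnt/nfs4
# rsync -aA /ufsthingie/home/ /mnt/nfs4/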
On Feb 26,
On Feb 27, 2009, at 18:23, C. Bergström wrote:
Blake wrote:
Care to share any of those in advance? It might be cool to see input
from listees and generally get some wheels turning...
raidz boot support in grub 2 is pretty high on my list to be honest..
Which brings up another question of
On Feb 27, 2009, at 20:02, Richard Elling wrote:
It wouldn't help. zfs send is a data stream which contains parts of files, not files (in the usual sense), so there is no real way to take a send stream and extract a file, other than by doing a receive.
If you create a non-incremental
On Mar 2, 2009, at 19:31, David wrote:
So nobody is interested in raidz grow support? i.e. you have 4 disks in a raidz and you only have room for a 5th disk (physically), so you add the 5th disk to the raidz. It would be a great feature for a home server and it's the only thing stopping
On Mar 2, 2009, at 18:37, Miles Nordin wrote:
And I'm getting frustrated pointing out these issues for the 10th
time [...]
http://www.xkcd.com/386/
On Mar 3, 2009, at 20:51, Richard Elling wrote:
This seems unusual, unless the EMC is mismatched wrt how they may have
implemented cache flush. The issues around this are described in
the Evil
Tuning Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
In May of last year dynamic LUN expansion went into OpenSolaris:
PSARC 2006/373 Dynamic Lun Expansion
http://opensolaris.org/os/community/on/flag-days/86-90/
Has that been ported to Solaris 10 yet? (Don't see anything in the u5
or u6 notes about it.) If it hasn't, can anyone
On Mar 11, 2009, at 20:14, mike wrote:
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm
It's hard to use the HAL sometimes.
I am trying to locate chipset info but having a hard time...
On Mar 11, 2009, at 21:59, mike wrote:
On Wed, Mar 11, 2009 at 6:53 PM, David Magda dma...@ee.ryerson.ca
wrote:
If you know someone who already has the hardware, you can ask them to run the Sun Device Detection Tool:
http://www.sun.com/bigadmin/hcl/hcts/device_detect.jsp
It runs
On Mar 18, 2009, at 12:43, Bob Friesenhahn wrote:
POSIX does not care about disks or filesystems. The only
correct behavior is for operations to be applied in the order that
they are requested of the operating system. This is a core function
of any operating system. It is therefore ok
On Mar 29, 2009, at 00:41, Michael Shadle wrote:
Well I might back up the more important stuff offsite. But in theory
it's all replaceable. Just would be a pain.
And what is the cost of the time to replace it versus the price of a
hard disk? Time ~ money.
There used to be a time when I
On Mar 29, 2009, at 12:40, Frank Middleton wrote:
So what is best if you get a 4th drive for a 3 drive raidz? Is it
better to keep it separate and use it for backups of the replaceable
data (perhaps on a different machine), as a hot spare, second parity,
or something else? Seems so un-green to
On Mar 29, 2009, at 12:17, Michael Shadle wrote:
Okay so riddle me this - can I create a raidz2 using the new disks
and move all the data from the existing zdev to it. Then recreate a
raidz2 this time using the old 7 disks?
And have them all stay in the same Zpool?
Yes, I believe so.
On Mar 29, 2009, at 13:24, Bob Friesenhahn wrote:
With so few drives it does not make sense to use raidz2, and
particularly since raidz2 still does not protect against user error,
OS bugs, severe over-voltage from a common power supply, or
meteorite strike.
I remember reading on this
On Mar 29, 2009, at 16:37, Michael Shadle wrote:
On Sun, Mar 29, 2009 at 10:35 AM, David Magda dma...@ee.ryerson.ca wrote:
Create new pool, move data to it (zfs send/recv), destroy old RAID-Z1 pool.
Would send/recv be more efficient than just a massive rsync or related?
Also I'd have
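A sketch of the send/recv route (pool and snapshot names made up); unlike rsync it carries over snapshots and dataset properties as well:
# zfs snapshot -r oldtank@migrate
# zfs send -R oldtank@migrate | zfs recv -F -d newtank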
On Mar 30, 2009, at 13:48, Michael Shadle wrote:
My only question is how long it takes to resilver... Supposedly the entire array has to be checked, which means 6x1.5TB. It has a quad core CPU that's basically dedicated to it. Anyone have any estimates?
Sounds like it is a lot slower
On Mar 30, 2009, at 19:13, Michael Shadle wrote:
Normally it seems like raid5 is perfectly fine for a workload like this, but maybe I'd sleep better at night knowing I could have 2 disks fail, but the odds of that are pretty slim. I've never had 2 disks fail, and if I did, the whole array is
On Mar 31, 2009, at 04:31, Scott Lawson wrote:
http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z
There's a more recent post on bp (block pointer) rewriting that will
allow for moving blocks around (part of cleaning up the scrub code):
http://blogs.sun.com/ahrens/entry/new_scrub_code
On Apr 7, 2009, at 16:43, OpenSolaris Forums wrote:
if you have a snapshot of your files and rsync the same files again, you need to use the --inplace rsync option, otherwise completely new blocks will be allocated for the new files. That's because rsync will write an entirely new file and
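i.e. something along the lines of (the flags other than --inplace are only illustrative):
# rsync -a --inplace --no-whole-file /source/data/ /tank/data/
--no-whole-file keeps rsync's delta algorithm on for local copies, so unchanged blocks aren't rewritten and stay shared with the snapshot.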
On Apr 18, 2009, at 11:27, Richard Elling wrote:
The win is nonvolatile main memory. When we get this on a large,
fast scale (and it will happen in our lifetime :-) then we can begin
to forget about file systems, with an interim step through ramdisks.
Tape is dead, Disk is tape, Flash is
Looking at the web site for Sun's SSD storage products, it looks like
what's been offered is the so-called Logzilla:
http://www.sun.com/storage/flash/specs.jsp
This is the unit that is used with the 'zpool add <pool> log <device>' command.
Are there any plans to add Readzilla offerings to the
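For reference, both flavours already exist as pool-level vdev types (device names made up):
# zpool add tank log c4t0d0      (slog, a.k.a. Logzilla)
# zpool add tank cache c4t1d0    (L2ARC device, a.k.a. Readzilla)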
On Apr 19, 2009, at 16:48, Richard Elling wrote:
David Magda wrote:
Also, on the currently available SSD, there's a 'Try and Buy' logo:
would it be possible to 'borrow' one of these SSDs for the sixty
days and run benchmarks, and if it doesn't help, remove them from
the ZFS pool where
On Apr 19, 2009, at 12:52, dick hoogendijk wrote:
You need redundancy and you don't get that on a single drive. A
sound use of ZFS needs it.
Not quite the same, but...
zfs set copies=2 myzfsfs ?
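Something like the following, keeping in mind that copies only applies to blocks written after the property is set, and it still won't save you from a whole-disk failure:
# zfs set copies=2 tank/home
# zfs get copies tank/home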
On Mon, April 27, 2009 02:13, Tomas Ögren wrote:
On 26 April, 2009 - Gary Mills sent me these 1,3K bytes:
I prefer NFS too, but the IMAP server requires POSIX semantics.
I believe that NFS doesn't support that, at least NFS version 3.
What non-POSIXness are you referring to, or is it just
On Apr 28, 2009, at 18:02, Richard Elling wrote:
Kees Nuyt wrote:
Some high availability storage systems overcome this decay by not just reading, but also writing all blocks during a scrub. In those systems, scrubbing is done semi-continuously in the background, not on user/admin demand.
On May 6, 2009, at 20:46, Bob Friesenhahn wrote:
After all this discussion, I am not sure if anyone adequately answered the original poster's question as to whether a 2540 with SAS 15K drives would provide substantial synchronous write throughput improvement when used as an L2ARC device.
On May 16, 2009, at 05:04, Ian Collins wrote:
dick hoogendijk wrote:
If I wanted to backup a server with non-global zones (all on zfs
filesystems) with zfs send I guess I don't have to halt the zones
first, because I create snapshots to send from. Is that right?
Yes.
To expand on that
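The snapshot pins the point in time, so the zones can stay running; incrementals then only ship the changes (names made up):
# zfs snapshot -r rpool/zones@tuesday
# zfs send -i @monday rpool/zones/zone1@tuesday | ssh backuphost zfs recv backup/zone1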
On Jun 11, 2009, at 05:44, Paul van der Zwan wrote:
Strange thing I noticed in the keynote is that they claim the disk
usage of Snow Leopard is 6 GB less than Leopard mostly because of
compression.
It's probably 6 GB because Leopard (10.5) ran on both Intel and
PowerPC chips (Universal
On Tue, June 16, 2009 15:32, Kyle McDonald wrote:
So the cache saves not only the time to access the disk but also the CPU
time to decompress. Given this, I think it could be a big win.
Unless you're in GIMP working on JPEGs, or doing some kind of MPEG video
editing--or ripping audio (MP3 /
On Jun 16, 2009, at 17:47, Scott Meilicke wrote:
I think (don't quote me) that ESX can only mount 64 iSCSI targets,
so you aren't much better off. But, COMSTAR (2009.06) exports a
single iSCSI target with multiple LUNs, so that gets around the
limitation. I could be all wet on this one,
On Wed, June 17, 2009 06:15, Fajar A. Nugraha wrote:
Perhaps compressing /usr could be handy, but why bother enabling
compression if the majority (by volume) of user data won't do
anything but burn CPU?
How do you define substantial? My opensolaris snv_111b installation
has 1.47x
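The ratio is easy to check per dataset, so compression can be enabled selectively where it actually pays off (dataset names made up):
# zfs set compression=on rpool/export/home
# zfs get compression,compressratio rpool/export/home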
On Wed, June 24, 2009 08:42, Philippe Schwarz wrote:
In my tests ESX4 seems to work fine with this, but I haven't really stressed it yet ;-)
Therefore, I don't know if the 1Gb full-duplex per port will be enough, and I don't know either whether I have to put in some sort of redundant access from ESX to the SAN, etc.
On Jun 24, 2009, at 16:54, Philippe Schwarz wrote:
Out of curiosity, any reason why you went with iSCSI and not NFS? There seems to be some debate on which is better under which circumstances.
iSCSI instead of NFS?
Because of the overwhelming difference in transfer rate between them, In
On Jun 30, 2009, at 14:08, Bob Friesenhahn wrote:
I have seen UPSs help quite a lot for short glitches lasting
seconds, or a minute. Otherwise the outage is usually longer than
the UPSs can stay up since the problem required human attention.
A standby generator is needed for any long
On Jul 1, 2009, at 12:37, Victor Latushkin wrote:
This issue (and previous one reported by Tom) has got some publicity
recently - see here
http://www.uknof.org.uk/uknof13/Bird-Redux.pdf
Joyent also had issues a while back as well:
http://tinyurl.com/ytyzs6
On Jul 4, 2009, at 14:30, Miles Nordin wrote:
yes, which is why it's worth suspecting knfsd as well. However I
don't think you can sell a Solaris system that performs 1/3 as well on
better hardware without a real test case showing the fast system's
broken.
It should be noted that RAID-0
On Wed, July 8, 2009 11:55, Jim Klimov wrote:
My typical runs between Unix hosts look like:
solaris# cd /pool/dumpstore/databases
solaris# while ! rsync -vaP --stats --exclude='*.bak' --exclude='temp' --partial --append source:/DUMP/snapshots/mysql . ; do sleep 5; echo = `date`: RETRY;