Back in April, I pinged this list[1] for help in specifying a ZFS server
that would handle high-capacity reads and writes. That server was
finally built and delivered, and I've blogged the results[2] as part of
a larger series[3] about that server.
[1]
I am running Solaris U4 x86_64.
Seems that something is changed regarding mdb:
# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 uppc
pcplusmp ufs ip hook neti sctp arp usba fctl nca lofs zfs random nfs sppp
crypto ptm ]
arc::print -a c_max
mdb: failed to
I will only comment on the chassis, as this is made by AIC (short for
American Industrial Computer), and I have three of these in service at
my work. These chassis are quite well made, but I have experienced
the following two problems:
snip
Oh my, thanks for the heads-up! Charlie at
Kent Watsen wrote:
I'm putting together an OpenSolaris ZFS-based system and need help
picking hardware.
Fun exercise! :)
I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the
OS, 4*(4+2) RAIDZ2 for SAN]
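For what it's worth, a quick back-of-the-envelope for that layout (the per-disk size below is a placeholder, not something from the original post):

```python
# Capacity sketch for the proposed 26-disk layout:
# a 2-disk RAID1 mirror for the OS, plus 4 RAIDZ2 vdevs of 6 disks (4 data + 2 parity).
disk_tb = 0.5          # placeholder per-disk size (500 GB drives were typical then)
os_disks = 2
vdevs, data_per_vdev, parity_per_vdev = 4, 4, 2

total_disks = os_disks + vdevs * (data_per_vdev + parity_per_vdev)
usable_tb = vdevs * data_per_vdev * disk_tb   # parity and the OS mirror excluded

print(total_disks)   # 26
print(usable_tb)     # 8.0
```

So roughly two thirds of the raw SAN capacity is usable, with any two disks per vdev allowed to fail.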
What are you *most* interested in for this server? Reliability?
On Fri, 14 Sep 2007, Sergey wrote:
I am running Solaris U4 x86_64.
Seems that something is changed regarding mdb:
# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 uppc
pcplusmp ufs ip hook neti sctp arp usba fctl nca lofs zfs random nfs sppp
crypto ptm ]
Fun exercise! :)
Indeed! - though my wife and kids don't seem to appreciate it so much ;)
I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for
the OS, 4*(4+2) RAIDZ2 for SAN]
What are you *most* interested in for this server? Reliability?
Capacity? High Performance?
Short question:
I'm curious as to how ZFS manages space (free and used) and how
its usage interacts with thin provisioning provided by HDS
arrays. Is there any effort to minimize the number of provisioned
disk blocks that get writes so as to not negate any space
benefits that thin provisioning
Kent Watsen wrote:
What are you *most* interested in for this server? Reliability?
Capacity? High Performance? Reading or writing? Large contiguous reads
or small seeks?
One thing that I did that got good feedback from this list was
picking apart the requirements of the most demanding
I have a huge problem with space maps on a Thumper. Space maps take over 3 GB,
and write operations generate massive read operations.
Before every spa sync phase, ZFS reads space maps from disk.
I decided to turn on compression for the pool (only for the pool, not the filesystems),
and it helps.
Now space
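For readers following along, a toy model of the mechanism described here -- space maps are per-metaslab logs of allocations and frees that must be read back and replayed before they can be used -- might look like this (illustrative only, not ZFS's actual on-disk format):

```python
# Toy space map: an append-only log of (op, offset, size) records.
# Knowing what is free requires replaying the whole log, which is why
# large space maps turn writes into heavy read traffic.
def replay(log):
    free = set()
    for op, off, size in log:
        blocks = range(off, off + size)
        if op == "free":
            free.update(blocks)
        else:  # "alloc"
            free.difference_update(blocks)
    return free

log = [("free", 0, 10), ("alloc", 2, 3), ("free", 2, 1)]
print(sorted(replay(log)))  # [0, 1, 2, 5, 6, 7, 8, 9]
```

Compressing the pool shrinks these on-disk logs, which is consistent with the improvement reported above.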
comments from a RAS guy below...
Adam Lindsay wrote:
Kent Watsen wrote:
What are you *most* interested in for this server? Reliability?
Capacity? High Performance? Reading or writing? Large contiguous reads
or small seeks?
One thing that I did that got good feedback from this list was
Just checking status on the resilver/scrub + snap reset issue -- it is very
painful for large pools such as exist on Thumpers that make heavy use of
snaps. Is this still on track for u5/pre-u5, or has it changed? Is there a
different view of these bugs with more information, so I do not need to
Mike Gerdts wrote:
Short question:
Not so short really :-)
Answers to some questions inline. I think others will correct me if I'm
wrong.
I'm curious as to how ZFS manages space (free and used) and how
its usage interacts with thin provisioning provided by HDS
arrays. Is there any effort
Mike Gerdts wrote:
I'm curious as to how ZFS manages space (free and used) and how
its usage interacts with thin provisioning provided by HDS
arrays. Is there any effort to minimize the number of provisioned
disk blocks that get writes so as to not negate any space
benefits that thin
On 9/14/07, Moore, Joe [EMAIL PROTECTED] wrote:
I was trying to compose an email asking almost the exact same question,
but in the context of array-based replication. They're similar in the
sense that you're asking about using already-written space, rather than
going off into virgin sectors
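A toy model of the interaction both posters are asking about -- copy-on-write directing every overwrite at a fresh physical block, so the array's ever-written block count only grows -- could be sketched like this (illustrative, not ZFS code):

```python
# Why copy-on-write can defeat array-side thin provisioning:
# each logical overwrite lands on a previously untouched physical block,
# so the array must back more blocks even though logical usage is flat.
class ThinArray:
    def __init__(self):
        self.provisioned = set()   # physical blocks the array has had to back
    def write(self, phys_block):
        self.provisioned.add(phys_block)

array = ThinArray()
next_free = 0
logical = {}                       # logical block -> current physical block

for _ in range(3):                 # overwrite logical block 0 three times
    logical[0] = next_free         # COW: each write picks a fresh physical block
    array.write(next_free)
    next_free += 1

print(len(logical))                # 1 logical block in use...
print(len(array.provisioned))      # ...but 3 physical blocks provisioned
```

An allocator that preferred recently freed blocks would keep the provisioned count down, which is essentially what the question asks ZFS to do.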
Is there a way to convert a 2 disk raid-z file system to a mirror without
backing up the data and restoring?
We have this:
bash-3.00# zpool status
pool: archives
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        archives    ONLINE       0
I suspect it's probably not a good idea but I was wondering if someone
could clarify the details.
I have 4 250G SATA(150) disks and 1 250G PATA(133) disk. Would it
cause problems if I created a raidz1 pool across all 5 drives?
I know the PATA drive is slower, so would it slow the access across
Won't come cheap, but this mobo comes with 6x pci-x slots... should get the job
done :)
http://www.supermicro.com/products/motherboard/Xeon1333/5000P/X7DBE-X.cfm
This message posted from opensolaris.org
___
zfs-discuss mailing list
Hello Brian,
On Fri, Sep 14, 2007 at 11:45:27AM -0700, Brian King wrote:
Is there a way to convert a 2 disk raid-z file system to a mirror without
backing up the data and restoring?
Currently there isn't a way to do this without having some
additional buffer disk space available.
What you
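The workaround usually takes the shape below; device names and the dataset layout are placeholders, the spare disk is the "additional buffer space" mentioned above, and the destroy steps are irreversible, so verify the staged copy before proceeding:

```shell
# Sketch only -- c1t2d0 is a hypothetical spare; c1t0d0/c1t1d0 are the
# two disks currently in the raid-z pool. Repeat send/recv per dataset.
zpool create staging c1t2d0                    # temporary pool on the spare
zfs snapshot archives/data@move
zfs send archives/data@move | zfs recv staging/data
# ...verify the copy, then rebuild the original disks as a mirror...
zpool destroy archives
zpool create archives mirror c1t0d0 c1t1d0
zfs send staging/data@move | zfs recv archives/data
zpool destroy staging
```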
I'd like to report the ZFS-related crash/bug described below. How do I go about
reporting the crash, and what additional information is needed?
I'm using my own very simple test app that creates numerous directories and
files of randomly generated data. I have run the test app on two machines,
Please see the following link:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache
Hth,
Victor
Sergey wrote:
I am running Solaris U4 x86_64.
Seems that something is changed regarding mdb:
# mdb -k
Loading modules: [ unix krtld genunix specfs
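For the archives: the guide's suggestion is a cap in /etc/system rather than patching c_max live with mdb. The value here is only an example (512 MB), and a reboot is required:

```
* /etc/system fragment -- cap the ZFS ARC (example value: 512 MB)
set zfs:zfs_arc_max = 0x20000000
```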
Go look at Intel - they have a pretty decent mobo with 6 SATA ports
Tim Cook wrote:
Won't come cheap, but this mobo comes with 6x pci-x slots... should get the
job done :)
http://www.supermicro.com/products/motherboard/Xeon1333/5000P/X7DBE-X.cfm
Kent Watsen wrote:
Getting there - can anybody clue me into how much CPU/Mem ZFS needs?
I have an old 1.2Ghz with 1Gb of mem laying around - would it be sufficient?
It'll use as much memory as you can spare and it has a strong preference
for 64 bit systems. Considering how much you
On Sep 14, 2007, at 8:16 AM, Łukasz wrote:
I have a huge problem with space maps on a Thumper. Space maps take
over 3 GB,
and write operations generate massive read operations.
Before every spa sync phase, ZFS reads space maps from disk.
I decided to turn on compression for the pool (only for
Paul Kraus wrote:
In the ZFS case I could replace the disk and the zpool would resilver
automatically. I could also take the removed disk and put it into the
second system and have it recognize the zpool (and that it was missing
half of a mirror) and the data was all there.
Tim Cook wrote:
Won't come cheap, but this mobo comes with 6x pci-x slots... should get the
job done :)
http://www.supermicro.com/products/motherboard/Xeon1333/5000P/X7DBE-X.cfm
Yes, but where do you buy SuperMicro toys?
SuperMicro doesn't sell online; anything neat that I've found is not
Tim Foster wrote:
Hi Joe,
On Thu, 2007-09-13 at 11:39 -0400, Poulos, Joe wrote:
Is there a way to automate destroying snapshots that are older
than x days?
Yeah, a few people have done stuff like this.
Chris has this:
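The policy half of such a script is easy to separate out and test: given (name, creation-epoch) pairs (e.g. read from each snapshot's creation property), pick the ones past the cutoff. The names and times below are made up, and actually destroying them would be a `zfs destroy` per name, left out here:

```python
# Pick snapshots older than `days`, given (name, creation_epoch) pairs.
def older_than(snapshots, days, now):
    cutoff = now - days * 86400
    return [name for name, created in snapshots if created < cutoff]

now = 1_190_000_000                       # fixed "current time" for the example
snaps = [("tank/home@daily-old", now - 40 * 86400),
         ("tank/home@daily-new", now - 2 * 86400)]
print(older_than(snaps, 30, now))         # ['tank/home@daily-old']
```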