Hi: (Warning: new ZFS user question)
I am setting up an X4500 as our small engineering site's file server.
It's mostly for builds, images, doc archives, certain workspace archives, and misc data.
I'd like a trade-off between space and safety of data. I have not set up a large ZFS system
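Since the question is about trading space against safety, here is one hedged sketch of the kind of layout often discussed for a Thumper-class box; the device names and group sizes are illustrative assumptions, not the actual X4500 controller mapping:

```shell
# Illustrative only: raidz2 groups survive two disk failures per
# group while giving better space efficiency than mirrors.
# Device names are placeholders, not real X4500 targets.
zpool create tank \
  raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
  raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
  spare c0t7d0
zpool status tank
```

Mirrors give roughly half the usable space of raidz2 but resilver faster; which end of the trade-off is right depends on the workload.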
Jason J. W. Williams wrote:
Hi All,
This is a bit off-topic... but since the Thumper is the poster child
for ZFS, I hope it's not too off-topic.
What are the actual origins of the Thumper? I've heard varying stories
in word and print. It appears that the Thumper was the original server
I have an 800 GB raidz2 ZFS filesystem. It already has approx 142 GB of data.
Can I simply turn on compression at this point, or does compression need to be
enabled at creation time? If I turn on compression now, what happens to the
existing data?
Thanks,
Neal
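Compression can be turned on at any time; the property only affects blocks written after the change, so the existing ~142 GB stays uncompressed until it is rewritten. A minimal sketch, with placeholder dataset names:

```shell
# Enable compression on an existing dataset; takes effect for new
# writes only -- existing data is left as-is until rewritten.
zfs set compression=on tank/data

# Confirm the setting and watch the achieved ratio over time.
zfs get compression,compressratio tank/data
```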
I've got my first server deployment with ZFS.
Consolidating a pair of other file servers that used to have
a dozen or so NFS exports in /etc/dfs/dfstab, similar to:
/export/solaris/images
/export/tools
/export/ws
... and so on
For the new server, I have one large ZFS pool:
-bash-3.00# df
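When consolidating dfstab-style exports, one common pattern is a child filesystem per former export, shared via the sharenfs property rather than /etc/dfs/dfstab; the pool and dataset names below are illustrative:

```shell
# One ZFS filesystem per former NFS export (names are examples).
zfs create tank/solaris_images
zfs create tank/tools
zfs create tank/ws

# Share through ZFS itself instead of editing /etc/dfs/dfstab.
zfs set sharenfs=ro tank/solaris_images
zfs set sharenfs=on tank/tools
zfs set sharenfs=on tank/ws
```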
Neal Pollack wrote:
I've got my first server deployment with ZFS.
Consolidating a pair of other file servers that used to have
a dozen or so NFS exports in /etc/dfs/dfstab, similar to:
/export/solaris/images
/export/tools
/export/ws
... and so on
For the new server, I have one large zfs
Tom Kimes wrote:
Here's a start for a suggested equipment list:
Lian Li case with 17 drive bays (12 × 3.5", 5 × 5.25")
http://www.newegg.com/Product/Product.aspx?Item=N82E1682064
So it only has room for one power supply. How many disk drives will you
be installing?
It's not the steady
Alec Muffett wrote:
Does anyone on this list have experience with a recent board with 6 or more
SATA ports that they know is supported?
Well so far I have only populated 5 of the ports I have available,
but my writeup with my 9-port SATA ASUS mobo is at:
Anon wrote:
Have the ICH-8 and ICH-9 been physically tested with Solaris? The page for
the AHCI driver still only lists support through ICH-6. What is
the Solaris support for the rest of the ICH-9 chipset, such as USB, etc.?
Ian Collins wrote:
[EMAIL PROTECTED] wrote:
If power consumption and heat is a consideration, the newer Intel CPUs
have an advantage in that Solaris supports native power management on
those CPUs.
Are P35 chipset boards supported?
The P35 chipset works fine with Solaris.
Ed Saipetch wrote:
Hello,
I'm experiencing major checksum errors when using a Syba Silicon Image 3114-based
PCI SATA controller with non-RAID firmware. I've tested by copying data
via sftp and SMB. With everything I've swapped out, I can't fathom this
being a hardware problem.
I can.
Edward Saipetch wrote:
Neal Pollack wrote:
Ed Saipetch wrote:
Hello,
I'm experiencing major checksum errors when using a syba silicon
image 3114 based pci sata controller w/ nonraid firmware. I've
tested by copying data via sftp and smb. With everything I've
swapped out, I can't
I'm running Nevada build 81 on x86 on an Ultra 40.
# uname -a
SunOS zbit 5.11 snv_81 i86pc i386 i86pc
Memory size: 8191 Megabytes
I started with this zfs pool many dozens of builds ago, approx a year ago.
I do live upgrade and zfs upgrade every few builds.
When I have not accessed the zfs file
For the last few builds of Nevada, if I come back to my workstation after
long idle periods such as overnight, and try any command that would touch
the ZFS filesystem, it hangs for approximately a full 60 seconds.
This would include ls, zpool status, etc.
Does anyone have a hint as to how I
Tomas Ögren wrote:
On 27 March, 2008 - Neal Pollack sent me these 1,9K bytes:
Also given: I have been doing live upgrade every other build since
approx Nevada build 46. I am running on a Sun Ultra 40 modified
to include 8 disks. (second backplane and SATA quad cable)
It appears
Andrius wrote:
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius [EMAIL PROTECTED] wrote:
zpool will not create a pool on a USB disk (formatted in FAT32).
It's already been formatted.
Try zpool create -f alpha c5t0d0p0
The same story
#
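If the FAT32 label is what is blocking the create, one thing worth trying (assuming c5t0d0 really is the USB disk) is handing ZFS the whole disk instead of the p0 partition device, so it can write its own EFI label:

```shell
# -f overrides the existing FAT32 label; using the whole-disk name
# (no p0/s0 suffix) lets ZFS label the disk itself.
zpool create -f alpha c5t0d0
zpool status alpha
```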
Andrius wrote:
Neal Pollack wrote:
Andrius wrote:
dick hoogendijk wrote:
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius [EMAIL PROTECTED] wrote:
zpool will not create a pool on a USB disk (formatted in FAT32).
It's already been formatted.
Try zpool create -f alpha c5t0d0p0
Lida Horn wrote:
Richard Elling wrote:
There are known issues with the Marvell drivers in X4500s. You will
want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
for the platform.
dick hoogendijk wrote:
I read this just now in the Unix Guardian:
quote
BTRFS, pronounced ButterFS:
BTRFS was launched in June 2007, and is a POSIX-compliant file system
that will support very large files and volumes (16 exabytes) and a
ridiculous number of files (two to the power of 64
Rahul wrote:
hi
can you give some disadvantages of the ZFS file system??
Yes, it's too easy to administer.
This makes it rough to charge a lot as a sysadmin.
All the problems, manual decisions during fsck and data recovery,
headaches after a power failure or getting disk drives mixed up
http://www.sun.com/bigadmin/features/articles/nvm_boot.jsp
I hope this helps.
Cheers,
Neal Pollack
Any further information welcome.
Ian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs
Bob Friesenhahn wrote:
SSDs + ZFS - a marriage made in (computer) heaven!
Where's the beef?
I sense a lot of smoke and mirrors here, similar to Intel's recent CPU
announcements which don't even reveal the number of cores. No
prices and funny numbers that the writers of technical
Running Nevada build 95 on an ultra 40.
Had to replace a drive.
Resilver in progress, but it looks like each
time I do a zpool status, the resilver starts over.
Is this a known issue?
On 09/17/08 02:29 PM, [EMAIL PROTECTED] wrote:
Are you doing snaps?
No, no snapshots ever.
Logged in as root to do:
zpool replace poolname deaddisk
and then did a few zpool status
as root. It restarted each time.
If so unless you have the new bits to handle the
issue, each snap restarts
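One workaround reported for builds where a root-run zpool status restarts the resilver is to monitor from an unprivileged account; this is a hedged suggestion, since the behavior varies by build, and the pool/user names are examples:

```shell
# Check progress without root; on affected builds only a root-invoked
# 'zpool status' restarted the resilver.
su - monitor -c 'zpool status tank'
```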
Erik Trimble wrote:
I was under the impression that MLC is the preferred type of SSD, but I
want to prevent myself from having a think-o.
I'm looking to get (2) SSD to use as my boot drive. It looks like I can
get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
Which
Tim wrote:
On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble [EMAIL PROTECTED] wrote:
I was under the impression that MLC is the preferred type of SSD,
but I
want to prevent myself from having a think-o.
I'm looking to get (2) SSD to use as my boot
On 09/24/08 10:57 PM, Jeff Bonwick wrote:
It's almost certainly the SIL3114 controller.
Google SIL3114 data corruption -- it's nasty.
I've also in the past had the misfortune of experiencing
Silicon Image. My corruption was with other file types
and not even ZFS. Silicon Image is
Tom Servo wrote:
How can I diagnose why a resilver appears to be hanging at a certain
percentage, seemingly doing nothing for quite a while, even though the
HDD LED is lit up permanently (no apparent head seeking)?
The drives in the pool are WD Raid Editions, thus have TLER and should
time
On 10/22/08 09:02 AM, Andrew Gallatin wrote:
Johan Hartzenberg wrote:
Reboot to the grub menu
Move to the failsafe kernel entry
Ugh. This is OpenSolaris (Indiana), and there *is* no failsafe
as far as I can tell. There is one grub entry for Solaris:
#-- ADDED BY BOOTADM - DO
On 11/03/08 13:18, Philip Brown wrote:
Ok, I think I understand. You're going to be told
that ZFS send isn't a backup (and for these purposes
I definitely agree), ...
Hmph. well, even for 'replication' type purposes, what I'm talking about is
quite useful.
Picture two remote systems,
On 11/07/08 11:24, Kumar, Amit H. wrote:
Is ZFS already the default file System for Solaris 10?
If yes has anyone tested it on Thumper ??
Yes. Formal Sun support is for Thumper running s10. For the latest
ZFS bug fixes, it is important to run the most recent s10 update release.
Right now,
On 02/23/09 20:24, Ilya Tatar wrote:
Hello,
I am building a home file server and am looking for an ATX mother
board that will be supported well with OpenSolaris (onboard SATA
controller, network, graphics if any, audio, etc). I decided to go for
Intel based boards (socket LGA 775) since it
I'm setting up a new X4500 Thumper, and noticed suggestions/blogs
for setting up two boot disks as a zfs rpool mirror during installation.
But I can't seem to find instructions/examples for how to do this using
google, the blogs, or the Sun docs for X4500.
Can anyone share some instructions for
the mapping is not matching s10 or the docs?
Cheers,
Neal
...
On our lab Thumper, they are c5t0 and c4t0.
Cindy
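For attaching the second boot disk after install, a sketch of the usual steps (the slice names assume the lab mapping above; verify with cfgadm/format first):

```shell
# Attach the second disk to turn the root pool into a mirror.
zpool attach rpool c5t0d0s0 c4t0d0s0

# After the resilver completes, put GRUB on the new half so the
# machine can boot from either disk (x86).
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t0d0s0
```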
Neal Pollack wrote:
I'm setting up a new X4500 Thumper, and noticed suggestions/blogs
for setting up two boot disks as a zfs rpool mirror during installation.
But I can't seem to find
On 03/18/09 10:43 AM, Tim wrote:
On Wed, Mar 18, 2009 at 12:14 PM, Richard Elling richard.ell...@gmail.com wrote:
Tim wrote:
Just an observation, but it sort of defeats the purpose of
buying sun hardware with sun software if you can't even
On 03/18/09 11:09 AM, Tim wrote:
On Wed, Mar 18, 2009 at 12:49 PM, Neal Pollack neal.poll...@sun.com wrote:
On 03/18/09 10:43 AM, Tim wrote:
On Wed, Mar 18, 2009 at 12:14 PM, Richard Elling richard.ell...@gmail.com
Hi:
What is the most common practice for allocating (choosing) the two disks
used for
the boot drives, in a zfs root install, for the mirrored rpool?
The docs for Thumper, and many blogs, always point at cfgadm slots 0 and 1,
which are sata3/0 and sata3/4, which most often map to c5t0d0 and
On 06/16/09 02:39 PM, roland wrote:
So, we have a 128-bit filesystem, but only support for 1 TB on 32-bit?
I'd call that a bug, isn't it? Is there a bug ID for this? ;)
Well, opinion is welcome.
I'd call it an RFE.
With 64 bit versions of the CPU chips so inexpensive these days,
how much money do
On 06/16/09 03:22 PM, Ray Van Dolson wrote:
On Tue, Jun 16, 2009 at 03:16:09PM -0700, milosz wrote:
Yeah, I pretty much agree with you on this. The fact that no one has
brought this up before is a pretty good indication of the demand.
There are about 1000 things I'd rather see fixed/improved
On 06/30/09 03:00 AM, Andre van Eyssen wrote:
On Tue, 30 Jun 2009, Monish Shah wrote:
The evil tuning guide says "The ZIL is an essential part of ZFS and
should never be disabled." However, if you have a UPS, what can go
wrong that really requires the ZIL?
Without addressing a single
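For reference, the tunable under discussion was historically set system-wide; a hedged sketch of the old mechanism (older Solaris/Nevada builds only), with the usual caveat that a UPS does not protect against kernel panics or controller faults, so acknowledged synchronous writes can still be lost:

```shell
# Historical /etc/system tunable (older builds); a reboot is
# required. Disabling the ZIL risks losing synchronous writes that
# applications were told had reached stable storage.
echo 'set zfs:zil_disable = 1' >> /etc/system
```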
On 07/21/09 03:00 PM, Nicolas Williams wrote:
On Tue, Jul 21, 2009 at 02:45:57PM -0700, Richard Elling wrote:
But to put this in perspective, you would have to *delete* 20 GBytes
Or overwrite (since the overwrites turn in to COW writes of new blocks
and the old blocks are released if
On 07/23/09 09:19 AM, Richard Elling wrote:
On Jul 23, 2009, at 5:42 AM, F. Wessels wrote:
Hi,
I'm using ASUS M3A78 boards (with the SB700) for OpenSolaris and M2A*
boards (with the SB600) for Linux, some of them with 4 × 1 GB and others
with 4 × 2 GB ECC memory. ECC faults will be detected and
On 07/31/09 06:12 PM, Jorgen Lundman wrote:
Finding a SATA card that would work with Solaris, and be hot-swap, and
more than 4 ports, sure took a while. Oh and be reasonably priced ;)
Let's take this first point; card that works with Solaris
I might try to find some engineers to write
On 08/25/09 05:29 AM, Gary Gendel wrote:
I have a 5 × 500 GB disk RAID-Z pool that has been producing checksum errors
right after upgrading SXCE to build 121. They seem to be occurring randomly on
all 5 disks, so it doesn't look like a disk-failure situation.
Repeatedly running a scrub on the
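A typical triage loop for intermittent checksum errors like these, with a placeholder pool name:

```shell
# List affected files, clear the error counters, then scrub again
# to see whether the errors recur on the same or different disks.
zpool status -v tank
zpool clear tank
zpool scrub tank
```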
On 08/25/09 10:46 PM, Tim Cook wrote:
On Wed, Aug 26, 2009 at 12:22 AM, thomas tjohnso...@gmail.com wrote:
I'll admit, I was cheap at first and my fileserver right now is
consumer drives. You can bet all my future purchases will be of the
http://www.dailytech.com/Startup+Drops+Bombshell+Lightning+SSD+With+180k+IOPS+500320+MBs+ReadWrites/article16249.htm
Pliant Technologies
just released two Lightning high-performance enterprise SSDs that threaten
to blow away the competition. The drives use proprietary ASICs to deliver an