On 11/29/12 10:56 AM, Jim Klimov wrote:
For example, I might want to have corporate webshop-related
databases and appservers to be the fastest storage citizens,
then some corporate CRM and email, then various lower priority
zones and VMs, and at the bottom of the list - backups.
AFAIK, now such
On 12/ 2/12 03:24 AM, Nikola M. wrote:
It is using Solaris Zones and throttling their disk usage on that level,
so you separate workload processes on separate zones.
Or even put KVM machines under the zones (Joyent and OI support the
Joyent-written KVM/Intel implementation in Illumos) for the same
On 12/ 2/12 05:19 AM, Richard Elling wrote:
On Dec 1, 2012, at 6:54 PM, Nikola M. minik...@gmail.com wrote:
On 12/ 2/12 03:24 AM, Nikola M. wrote:
It is using Solaris Zones and throttling their disk usage on that level,
so you separate workload processes on separate zones.
Or even put KVM
On 12/27/11 09:20 PM, Frank Cusack wrote:
http://sparcv9.blogspot.com/2011/12/solaris-11-illumos-and-source.html
If I upgrade ZFS to use the new features in Solaris 11 I will be
unable to import my pool using the free ZFS implementation that is
available in illumos based
On 04/26/11 01:56 AM, Lamp Zy wrote:
Hi,
One of my drives failed in Raidz2 with two hot spares:
What are your zpool/zfs versions? (Run zpool upgrade and zfs upgrade, then Ctrl+C.)
The latest zpool/zfs versions available by numerical designation in all
OpenSolaris-based distributions are zpool 28 and zfs v.
I am forwarding this to the openindiana-disc...@openindiana.org list,
in the hope of reaching a wider audience for this question.
Original Message
Message-ID: 4db68e08.9040...@googlemail.com
Date: Tue, 26 Apr 2011 11:19:04 +0200
From: achim...@googlemail.com
On 04/ 6/11 07:14 PM, Brandon High wrote:
On Tue, Apr 5, 2011 at 12:38 PM, Joe Auty j...@netmusician.org
mailto:j...@netmusician.org wrote:
How about getting a little more crazy... What if this entire
server temporarily hosting this data was a VM guest running ZFS? I
don't
On 03/23/11 09:07 AM, Pawel Jakub Dawidek wrote:
On Sun, Mar 20, 2011 at 01:54:54PM +0700, Fajar A. Nugraha wrote:
On Sun, Mar 20, 2011 at 4:05 AM, Pawel Jakub Dawidek p...@freebsd.org wrote:
On Fri, Mar 18, 2011 at 06:22:01PM -0700, Garrett D'Amore wrote:
Newer versions of FreeBSD have newer
On 03/19/11 12:17 AM, Toby Thain wrote:
On 18/03/11 5:56 PM, Paul B. Henson wrote:
We've been running Solaris 10 for the past couple of years, primarily to
leverage zfs to provide storage for about 40,000 faculty, staff, and
students ... and at this point want to start reevaluating our best
Orvar Korvar wrote:
Sometimes you read about people having low performance deduping: it is
because they have too little RAM.
I mostly heard they have low performance when they start deleting
deduplicated data, not before that.
So do you think that with 2.2GB of RAM per 1 TB of storage,
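The RAM-per-TB figure being discussed can be sanity-checked with back-of-envelope arithmetic (a rough sketch only; the ~320 bytes per dedup-table entry and the 128 KiB average block size are community rules of thumb, not exact on-disk numbers):

```shell
# Rough DDT (dedup table) sizing per TiB of unique data.
# Assumptions: 128 KiB average block size, ~320 bytes per DDT entry.
blocks_per_tb=$(( 1024 * 1024 * 1024 / 128 ))   # 1 TiB / 128 KiB = 8388608 blocks
ddt_bytes=$(( blocks_per_tb * 320 ))            # ~2.5 GiB of DDT per TiB stored
echo "$(( ddt_bytes / 1024 / 1024 )) MiB of DDT per TiB"
```

which lands in the same ballpark as the 2.2 GB per 1 TB figure quoted above.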
Orvar Korvar wrote:
ZFS does not handle 4K-sector drives well; you need to create a new zpool
with the 4K property (ashift) set.
http://www.solarismen.de/archives/5-Solaris-and-the-new-4K-Sector-Disks-e.g.-WDxxEARS-Part-2.html
Are there plans to allow resilver to handle 4K sector drives?
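As background to the ashift property mentioned above: ashift is the pool's minimum block size expressed as a power of two, so the right value is just log2 of the physical sector size (a minimal illustrative sketch; this loop is not how zpool itself probes the drive):

```shell
# Compute ashift = log2(sector size): 512-byte sectors -> 9, 4K sectors -> 12.
sector=4096
ashift=0
while [ $(( 1 << ashift )) -lt "$sector" ]; do
  ashift=$(( ashift + 1 ))
done
echo "ashift=$ashift for ${sector}-byte sectors"
```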
On 8/13/2010 at 8:56 PM Eric D. Mudama wrote:
|On Fri, Aug 13 at 19:06, Frank Cusack wrote:
|Interesting POV, and I agree. Most of the many distributions of
|OpenSolaris had very little value-add. Nexenta was the most
interesting
|and why should Oracle enable them to build a business at their
Freddie Cash wrote:
You definitely want to do the ZFS bits from within FreeBSD.
Why not use ZFS in OpenSolaris? At least it has the most stable/tested
implementation, and also the newest one if needed.
___
zfs-discuss mailing list
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Arne Jansen
Don't host 50k filesystems on a single pool. It's
more pain than it's
worth.
I assume Michael has reached this conclusion due to
factors which are not
necessary to
What operating system does it run?
--
This message posted from opensolaris.org
Hi Fred,
Have you read the ZFS On Disk Format Specification paper
at:
http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/ondiskformat0822.pdf?
Fred pam wrote:
Hi Richard, thanks for your time, I really appreciate it, but I'm still unclear
on how this works.
So uberblocks
Hi,
In simple terms, the ARC is divided into a MRU and MFU side.
target size (c) = target MRU size (p) + target MFU size (c-p)
On Solaris, to get from the MRU to the MFU side, a block must be
read at least once within 62.5 milliseconds. For pure read-once workloads,
the data won't move to the
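The target-size identity above can be illustrated numerically (the sizes here are made up; only the c = p + (c - p) relationship comes from the text):

```shell
# ARC targets: total target c splits into MRU target p and MFU target c - p.
c=8192      # target ARC size in MiB (hypothetical)
p=5120      # target MRU size in MiB (hypothetical)
mfu=$(( c - p ))
echo "MRU=${p}MiB MFU=${mfu}MiB total=$(( p + mfu ))MiB"
```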
Hi,
You might take a look at
http://www.osdevcon.org/2008/files/osdevcon2008-max.pdf
and
http://www.osdevcon.org/2008/files/osdevcon2008-proceedings.pdf, starting
at page 36.
Or you might just use od -x file for the file part of your assignment.
Have fun.
max
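The od suggestion above can be tried on any small file (the filename here is made up for illustration):

```shell
# Dump a file at the hex level, as suggested for the on-disk-format exercise.
printf 'hello zfs' > /tmp/od_demo.txt
od -x /tmp/od_demo.txt   # hex words; byte grouping depends on host endianness
od -c /tmp/od_demo.txt   # same bytes shown as characters
```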
Eric D. Mudama wrote:
On Fri,
Hi Richard,
Richard Elling wrote:
On Mar 25, 2010, at 2:45 PM, John Bonomi wrote:
I'm sorry if this is not the appropriate place to ask, but I'm a student and
for an assignment I need to be able to show at the hex level how files and
their attributes are stored and referenced in ZFS. Are
c1t5d0 ONLINE 0 0 0
c1t4d0 is the physically replaced LUN. Then I'm trying to replace it in the pool:
r...@myhost:~# zpool replace -f mypool c1t4d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c1t4d0s0 is part of active ZFS pool mypool
Hi James,
I just spent about a week recovering about 10TB of file data
for someone who encountered a (somewhat?) similar problem to what you
are seeing.
If you are still having problems with this, please contact me off-list.
Regards,
max
James Risner wrote:
It was created on AMD64 FreeBSD
I was just looking to see if it is a known problem before I submit it as a bug.
What would be the best category to submit the bug under? I am not sure if it is
driver/kernel issue. I would be more than glad to help. One of the machines is
a test environment and I can run any dumps/debug
I already changed some of the drives; no difference. The target drive seems to
have random characters - most likely not from the drives.
I bought a 1 TB external USB disk from Western Digital (1) and put it in my
2008.11 machine. The machine discovered the disk directly and I ran a 'zpool
create xpool c11t0d0' command:
# zpool list
NAME    SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
[...]
xpool   928G  81K   928G   0%
Hi Bob,
Bob Friesenhahn wrote:
On Tue, 4 Aug 2009, Prabahar Jeyaram wrote:
You seem to be hitting :
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6586537
The fix is available in OpenSolaris build 115 and later, but not yet for
Solaris 10.
It is interesting that this is a simple
Hi Ross,
Ross wrote:
#3 zfs, unlike other things like the build system, is extremely well
documented. There are books on it, code to read, and even instructors
(Max Bruning) who can teach you about the internals. My project even
organized a free online training for this
Again,
On Sun, Jul 12, 2009 at 7:27 PM, Cyril Plisko cyril.pli...@mountall.com wrote:
I am talking about the process, not the announcement.
What's wrong with process?
--
Kind regards, BM
Things, that are stupid at the beginning, rarely ends up wisely.
I only took a cursory look at the discussion below but I suspect
that vi isn't just overwriting the file.
If vi is saving a copy then doing an rm+rename, the ACL on
the saved file was either inherited from the parent directory
when the copy was saved, or vi attempted to copy the
Hi, Miles!
Hope, weather is fine at your place. :-)
On Sat, Jun 20, 2009 at 5:09 AM, Miles Nordin wrote:
I understood Bogdan's post was a trap: ``provide bug numbers. Oh,
they're fixed? nothing to see here then. no bugs? nothing to see
here then.''
It would be great if you did not put words
2009/6/18 Timh Bergström timh.bergst...@diino.net:
USB-sticks has proven a bad idea with zfs mirrors
I think USB sticks are a bad idea for mirrors in general... :-)
ZFS on iSCSI *is* flaky
OK, so what is the status of your bug report about this? Was it ignored or
just rejected?
Flaming people on
On Fri, Jun 19, 2009 at 11:16 AM, Erik Trimble erik.trim...@sun.com wrote:
I can't say as to the entire Atom line of stuff, but I've found the Atoms
are OK for desktop use, and not anywhere powerful enough for even a basic
NAS server. The demands of wire-speed Gigabit, ZFS, and
On Wed, Jun 17, 2009 at 8:37 PM, Orvar Korvar no-re...@opensolaris.org wrote:
Ok, so you mean the comments are mostly FUD and bull shit?
Unless there is real step-by-step reproducible proof, then yes, it is a
completely useless waste of time and BS that I would not care about at all,
if I were you.
--
On Thu, Jun 18, 2009 at 6:42 AM, Miles Nordin car...@ivy.net wrote:
Surely you can understand there is such a thing as a ``hard to reproduce
problem''? Is the phrase so new to you? If you'd had experience with
other filesystems in their corruption-prone infancy, it wouldn't be.
I understand your
On Wed, Jun 17, 2009 at 8:32 AM, Neal Pollack neal.poll...@sun.com wrote:
Not sure I understand all this concern. 32 bit can use 1.0 TB disks as data
drives. ZFS can use more than 1 disk. So if you hook up 48 of the 1.0 TB
disks
using ZFS on a 32 bit system, where is the problem?
+1.
Even
On Wed, Jun 17, 2009 at 6:58 AM, Miles Nordin car...@ivy.net wrote:
What have you done to try to reproduce the problem?
Well, if you had posted here steps that fail for you and I missed
them, then I am sorry; I would like to get this from the archive
and try.
However, please don't get me
On Wed, Jun 17, 2009 at 6:58 AM, Miles Nordin car...@ivy.net wrote:
What have you done to try to reproduce the problem?
P.S. I've read that Slashdot article and all the comments and even
replied to some. Plus, I've actually tried to reproduce a few things that
they are vaguely able to describe. No
On Wed, Jun 17, 2009 at 11:49 AM, Brad Reese no-re...@opensolaris.org wrote:
Yes, you may access the system via ssh. Please contact me at bar001 at uark
dot edu and I will reply with details of how to connect.
...and then please tell us what was wrong! :-)
--
Kind regards, BM
Things, that
On Mon, Jun 15, 2009 at 12:45 PM, Andre Lue no-re...@opensolaris.org wrote:
Hi Bogdan,
I'd recommend the following RAM minimums for a fair balance of performance.
700 MB 32-bit
1 GB 64-bit
OK, it probably means 2 GB in actual practice. :-) Thanks!
--
Kind regards, bm
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Shannon Fiume wrote:
I just installed 2009.06 and found that compression isn't enabled by
default when filesystems are created. Does it make sense to have an
RFE open for this? (I'll open one tonight if need be.) We keep telling
people to turn on
On Tue, Jun 16, 2009 at 2:45 AM, Orvar Korvar no-re...@opensolaris.org wrote:
According to this webpage, there are some errors that make ZFS unusable
under certain conditions.
That is not really optimal for an Enterprise file system. In my opinion the
ZFS team should focus
on bug correction
On Mon, Jun 15, 2009 at 5:50 AM, Andre Lue no-re...@opensolaris.org wrote:
Feel free to give EON a twirl. It will only cost you a CD and the time to burn
and boot it. Or if you have a VM you can test it there. You'll know really
fast if it has enough of a framework for you to add the missing
On Sun, May 24, 2009 at 6:11 PM, Anil Gulecha anil.ve...@gmail.com wrote:
One example is StormOS, an XFCE-based distro being built on NCP2.
According to the latest blog entry, a release is imminent. Perhaps
you'll have a better desktop experience with this. (www.stormos.org)
So. Tried it just
On Sat, May 23, 2009 at 5:11 PM, Anil Gulecha anil.ve...@gmail.com wrote:
Hi Bogdan,
Which particular packages were these? RC3 is quite stable, and all
server packages are solid. If you do face issues with a particular
one, we'd appreciate a bug report. All information on this is
helpful..
On Sun, May 24, 2009 at 6:11 PM, Anil Gulecha anil.ve...@gmail.com wrote:
Yes, NCP's desktop-side components (X/Gnome/XFCE) are untested, which
explains the issues. The focus is on the core.. and it has been so
since the move from Gnusolaris (the older Gnome based Nexenta) to
Nexenta Core
On Sat, May 23, 2009 at 4:56 AM, Joe S js.li...@gmail.com wrote:
EON ZFS NAS
http://eonstorage.blogspot.com/
No idea.
NexentaCore Platform (v2.0 RC3)
http://www.nexenta.org/os/NexentaCore
Personally, I tried it a few times. For now, it is still too broken
for me and looks scary.
Hello, folks.
I have a kind of problem with mirrors and raidz.
System config:
— SunOS 5.11, snv_111a
— Service system/filesystem/rmvolmgr is disabled.
Hardware:
— Asus EeePC Box B202
— Two USB 3.5 inches boxes.
If I reboot: mirror or raidz works fine. But if I connect physical USB
cables in different
Hi,
For anyone interested, I have blogged about raidz on-disk layout at:
http://mbruning.blogspot.com/2009/04/raidz-on-disk-format.html
Comments/corrections are welcome.
thanks,
max
Hi,
iman habibi wrote:
Hi to All
I defined 2 GB for swap space when installing Solaris 10 with the ZFS file
system, but when I run vmstat it shows about 3 GB free! Why does this happen?
My real memory is 4 GB.
Can anybody explain this? I'm confused!
# vmstat 10
kthr      memory            page
Hi!
Today I have ten computers with Xen and Linux, each with 2 disks of 500 GB in
raid1; each node sees only its own raid1 volume. I do not have live migration
of my virtual machines... and moving the data from one hypervisor to another
is a painful task...
Now that I discovered this awesome file
Here's what fixed this:
Added
tx_hcksum_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
lso_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
to /kernel/drv/e1000g.conf
The client isn't losing the network. After the issue I am still able to ping
other systems and get to the web from the Mac. On the other hand, I can't ping
any hosts or get to the web from the server.
I currently am sharing out zfs ISCSI volumes from a solaris server to a Mac. I
installed ZFS locally on the mac, created a local zfs pool and put a zfs
filesystem on the local volume. Can I now mount the volume on the Solaris
server and see the data?
Some additional info:
I'm using the default port 3260 and the native iscsitgt (not COMSTAR). What
other details would you like? I'm running OpenSolaris 2008.11 snv_101b_rc2 X86
on a Thinkpad T60p laptop with 3GB of memory, a 2.1GHz Core Duo proc, and 2
500GB mirrored USB drives. I did notice that when
I am sharing out ZFS iSCSI LUNs to my Mac. When copying large files, the
network will hang in the middle of the transfer and the LUN will become
unavailable until I plumb the NIC. This issue appears to occur only when I
am reading files (i.e., syncing an iPod) and not writing (I'm not 100%
My son (15 years old) has installed OpenSolaris 2008.11 on disk on
his system and everything was OK until he made a newbie mistake and
edited the /etc/vfstab file incorrectly, that now prevents him from
booting. (Think he had done too much Linux...)
It just hangs on the splash screen.
My idea was
Raymond Scott wrote:
I'm very glad to see ZFS for boot available now. We have begun to use
X4150 servers and had settled on using the built-in HW RAID for mirroring
the drives in pairs. Two for Boot, two for data etc...
Is it a good idea to first create a HW RAID mirror and then install
On Thu, 2008-11-13 at 16:07 -0500, Miles Nordin wrote:
If you can find a small drive laying around, here is another option
that might work, but you could lose the whole pool due to some
miscalculation or another mistake:
1. make a new, small 1-drive zpool on the small drive
2. make a
Hi Richard,
On Wed, 2008-10-22 at 14:04 -0700, Richard Elling wrote:
It is more important to use a separate disk, than to use a separate and fast
disk. Anecdotal evidence suggests that using a USB hard disk works
well.
While I don't necessarily disagree with your statement, please note that
Hi Jeff,
On Fri, 2008-10-10 at 01:26 -0700, Jeff Bonwick wrote:
The circumstances where I have lost data have been when ZFS has not
handled a layer of redundancy. However, I am not terribly optimistic
of the prospects of ZFS on any device that hasn't committed writes
that ZFS thinks are
On Fri, 2008-10-10 at 11:23 -0700, Eric Schrock wrote:
But I haven't actually heard a reasonable proposal for what a
fsck-like tool (i.e. one that could repair things automatically) would
actually *do*, let alone how it would work in the variety of situations
it needs to (compressed RAID-Z?)
Hi Jack,
On Thu, 2008-09-11 at 15:37 -0700, Jack Dumson wrote:
Issues with ZFS and Sun Cluster
If a cluster node crashes and the HAStoragePlus resource group containing
a ZFS structure (i.e. a zpool) is transitioned to a surviving node, the
zpool import can cause the surviving node to panic.
Toby Thain wrote:
On 27-Aug-08, at 5:47 PM, Ian Collins wrote:
Tim writes:
On Wed, Aug 27, 2008 at 3:29 PM, Ian Collins [EMAIL PROTECTED]
wrote:
Does anyone have any tuning tips for a Subversion repository on
ZFS? The repository will mainly be storing binary files (MS Office
documents).
Richard Gilmore wrote:
Hello Zfs Community,
I am trying to find out whether ZFS has a tool comparable to Veritas's
vxbench. Any ideas? I see a tool called vdbench that looks close, but
it is not a Sun tool. Does Sun recommend something to customers moving
from Veritas to ZFS who like vxbench
Michael Hale wrote:
Around 9:45 this morning, our mailserver (SunOS 5.11 snv_91 i86pc i386
i86pc) rebooted.
[...]
dumping to /dev/zvol/dsk/rootpool/dump, offset 65536, content: kernel
Is there a way to tell if ZFS caused the kernel panic? I notice that
it says imapd: in the middle of
Nathan Kroenert wrote:
For what it's worth, I started playing with USB + flash + ZFS and was
most unhappy for quite a while.
I was suffering with things hanging, going slow or just going away and
breaking, and thought I was witnessing something zfs was doing as I was
trying to do mirror
On Tue, 2008-06-03 at 23:33 +0100, Paulo Soeiro wrote:
6) Removed and attached the USB sticks:
zpool status
pool: myPool
state: UNAVAIL
status: One or more devices could not be used because the label is
missing
or invalid. There are insufficient replicas for the pool to continue
Darryl wrote:
This thread really messed me up, posts don't follow a chronological order...
so sorry for all the extra posts!
That's what you get when you don't use working tools like usenet news.
nntp for ever!!!
Paulo Soeiro wrote:
Greetings,
I was experimenting with zfs, and I made the following test: I shut down
the computer during a write operation
on a mirrored USB storage filesystem.
Here is my configuration
NGS USB 2.0 Minihub 4
3 USB Silicom Power Storage Pens 1 GB each
These are the
Justin Vassallo wrote:
Thommy,
If I read correctly your post stated that the pools did not automount on
startup, not that they would go corrupt. It seems to me that Paulo is
actually experiencing a corrupt fs
Nah, I also had indications of corrupted data if you read my posts.
But the data
Leal,
The entire configuration through our corporation is being defined. One of our
team members is heavy into EMC - 200Tb is his normal operating range.
However, for this need we are focused just on local smart appliances the
purpose of which is to do more than just automatically mirror the
The important thing is to protect your data. You have lots of options here,
so we'd need to know more precisely what the other requirements are before
we could give better advice.
-- richard
Please let me come in with a parallel need, the answer to which should
contribute to this thread.
Bob Friesenhahn wrote:
On Wed, 21 May 2008, Will Murnane wrote:
So, my questions are:
* Are there options I can set server- or client-side to make Solaris
child mounts happen automatically (i.e., match the Linux behavior)?
* Will this behave with automounts? What I'd like to do is list
Can someone tell me or point me to links that describe how to
do the following.
I had a machine that crashed and I want to move to a newer machine
anyway. The boot disk on the old machine is fried. The two disks I was
using for a zfs pool on that machine need to be moved to a newer machine
Thanks Jim and Richard -
zpool import -f mypool
worked. This is a mystery to me since I had done a zpool import before
and it did not see mypool, but does now. Anyway I'm in good shape now.
Steve C.
Jim Dunham wrote:
Steve,
Can someone tell me or point me to links that describe how to
This is probably a FAQ but I have been unable to turn up the answer in
searches, thanks for your patience.
I have a zfs testbed set up with 3x 200 GB SATA drives in raidz. I pulled a
drive (ad4) and replaced it to experience the rebuild procedure. After
scrubbing/resilvering, I get the status
Say I have a raidz of 3 disks: ad4 ad6 ad8
I want to tell zfs that I am pulling ad4 and replacing it with a new disk on
the same controller, but I can't get it to release its hold on ad4.
[EMAIL PROTECTED]:/home/matt]# zpool offline pond ad4
Bringing device ad4 offline
[EMAIL
Mark Shellenbaum wrote:
Maurilio Longo wrote:
Hi,
I'm testing ZFS+CIFS server using nexenta core rc4, everything seems fine
and speed is also ok, but DOS programs don't see sub-dirs (command.com sees
them, though).
I've set casesensitivity=insensitive in the ZFS filesystem that I'm
Hi Owen,
Owen Davies wrote:
I'm not sure of the implications of Lustre using the ZFS DMU. Does this mean a subset of ZFS functionality, binary compatibility of written disks, or what?
It means Lustre will be using ZFS (instead of ext4/ldiskfs) as its disk
storage backend in metadata and
Steve McKinty wrote:
1) First issue relates to the überblock. Updates to it are assumed to be atomic, but if the replication block size is smaller than the überblock then we can't guarantee that the whole überblock is replicated as an entity. That could in theory result in a corrupt
No boot problems. The zfs filesystems are in the same pool. What would be nice
is something like
zfs set mountorder=1 local/apps
zfs set mountorder=2 local/apps-bin
or something along that line. So one zfs filesystem can be reliably mounted to
a point inside another zfs filesystem.
This
I have been testing X4500 server and ZFS for several days.
After I destroyed the EFI label of a disk in a mirror or raidz pool,
resilvering process starts automatically.
After a few minutes, it seems that the resilvering process consumes almost
all memory, and this causes a system hang.
I think I may
Robert,
It's not that bad with CPU usage.
For example, with RAID-Z2 while doing a scrub I get something like
800MB/s read from disks (550-600MB/s from a zpool iostat perspective),
and all four cores are mostly consumed - I get something like 10% idle
on each CPU.
===
But in the end this would
Al,
snip
Being a friend of simplicity, I was thinking about using a pair (or more)
of 3320 SCSI JBODs with multiple RAIDZ and/or RAID10 zfs disk pools on
which we'd
Have you not heard that SCSI is dead? :)
SCSI == slow/dead, well more or less, that is
While I understand you don't want
Dear all,
we're currently looking forward to restructure our hardware environment for
our datawarehousing product/suite/solution/whatever.
We're currently running the database side on various SF V440's attached via
dual FC to our SAN backend (EMC DMX3) with UFS. The storage system is
(obviously
Hello,
I really appreciate such information. Could you please give us some additional
insight regarding your statement that [you] tried to drive ZFS to its limit,
[...]
and found that the results were less consistent or predictable?
Especially when taking a closer look at the upcoming