Hi Matt,
cool, thank you for doing this!
I'll still write my script since today my two shiny new 320GB USB
disks will arrive :).
I'll add to that the feature of first sending all current snapshots, then
bringing down the services that depend on the filesystem, unmounting the
old fs, and sending a final
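(A minimal sketch of that sequence, assuming hypothetical names: source oldpool/fs, target newpool, and an SMF service app/myservice.)
# zfs snapshot oldpool/fs@pre
# zfs send oldpool/fs@pre | zfs recv newpool/fs
# svcadm disable app/myservice
# zfs unmount oldpool/fs
# zfs snapshot oldpool/fs@final
# zfs send -i oldpool/fs@pre oldpool/fs@final | zfs recv -F newpool/fs
The first send runs while the services are still up; the incremental send after quiescing carries only the blocks that changed in between.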
Robert Milkowski wrote:
Hello Malachi,
Thursday, March 29, 2007, 1:36:46 AM, you wrote:
Why 2x(4G)? Hmmm. Good question. I guess I am just used to doing that
for FreeBSD. I do plan on running multiple Xen domU at the same
time... Are you thinking swap shouldn't be that big?
Robert Milkowski writes:
Hello Selim,
Wednesday, March 28, 2007, 5:45:42 AM, you wrote:
SD talking of which,
SD what's the effort and consequences to increase the max allowed block
SD size in zfs to higher figures like 1M...
Think what would happen then if you try to read 100KB out of such a block: ZFS reads and checksums whole blocks, so a small read would drag in the full 1MB.
Try throttling back the max # of IOs. I saw a number of errors similar to this
on Pillar and EMC.
In /etc/system, set:
set sd:sd_max_throttle=20
and reboot.
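(To confirm the live value after reboot; standard mdb usage, not part of the original advice:)
# echo sd_max_throttle/D | mdb -k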
I have added the setting and rebooted. I'm doing the same tests now
and will know in a day or so if I can avoid the error (from the
Hello Krzys,
Thursday, March 29, 2007, 2:13:26 AM, you wrote:
K Awesome, that worked great for me... I did not know I had to put c1t2d0 in
K there... but hey, it works and that is all that matters. Thank you so very
much.
At first it seems strange, but when you think about what would be going on if you
Hello Matthew,
Thursday, March 29, 2007, 2:23:38 AM, you wrote:
MA Robert Milkowski wrote:
Hello zfs-discuss,
What will happen if I create a stripe pool of 3 disks, then create
some symlinks and then overwrite one disk with 0s.
Ditto blocks should self-heal metadata, so file systems will
Hello Robert,
Thursday, March 29, 2007, 11:37:48 AM, you wrote:
RM Hello Matthew,
RM Thursday, March 29, 2007, 2:23:38 AM, you wrote:
MA Robert Milkowski wrote:
Hello zfs-discuss,
What will happen if I create a stripe pool of 3 disks, then create
some symlinks and then overwrite one disk
Hello Matthew,
Thursday, March 29, 2007, 3:29:36 AM, you wrote:
MA Constantin Gonzalez wrote:
What is the most elegant way of migrating all filesystems to the new pool,
including snapshots?
Can I do a master snapshot of the whole pool, including sub-filesystems and
their snapshots, then
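(A sketch of one per-filesystem approach from that era, before a recursive send existed; pool names hypothetical:)
# zfs snapshot -r oldpool@migrate
# for fs in `zfs list -H -o name -r oldpool`; do
>   zfs send $fs@migrate | zfs recv -d newpool
> done
Snapshots older than @migrate still have to be replayed per filesystem as incremental sends, and the pool's root dataset may need special casing.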
Unfortunately we don't have experience with NexSAN.
HDS are quite conservative, and with a value of 8 we run quite stably (with UFS).
We also found that value appropriate for old HP EMA arrays (old units, but very,
very reliable! Digital products were rock solid).
gino
Hi everyone,
Sorry for crossposting but it seems I have stumbled upon a problem
that affects both. I have a V490 running Solaris 10u3 with a 16x750GB
raid array connected to it. I've created an 8TB zfs filesystem called
data1 and created a zfs filesystem called data1/zones mounted to
/zones.
Hi Jim,
that's absolutely great, respect.
Where is it possible to get more info about what you have done so far, so I can rebuild it for my own try?
cheers
Jens
Hello storage-discuss,
First - I'm aware of Proposal: ZFS hotplug support and
autoconfiguration by Eric Schrock.
I have presented each physical disk from EMC CX3-40 as a LUN and then
created RAID-10 using zfs. All devices are under MPxIO, system is
S10U3+patches (x64).
Now I physically removed
I suppose what would have been nice to see, architecturally,
was a way to transform data at some part in the pipeline and
to be able to specify various types of transforms, be they
compression, encryption or something else. But maybe I'm
just dreaming without understanding the complexities of
On Thu, Mar 29, 2007 at 01:18:31PM +0300, Niclas Sodergard wrote:
Sorry for crossposting but it seems I have stumbled upon a problem
that affects both. I have a V490 running Solaris 10u3 with a 16x750GB
raid array connected to it. I've created an 8TB zfs filesystem called
data1 and created a
Hi all,
On Wed, 2007-03-28 at 14:23 -0700, Lin Ling wrote:
We will make the manual and netinstall instructions available to
non-SWAN folks shortly.
Tim Foster also has a script to do the set up, wait for his blog.
Just put that blog post up - you can find it at
On 3/29/07, Ed Plese [EMAIL PROTECTED] wrote:
Is there a solution here other than to move the zone root to a smaller disk?
Set a quota (10G should work just fine) on the filesystem and then
perform the zone install. Afterwards remove the quota.
Thanks, seems to work just fine. It solved my
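(A minimal sketch of that workaround, using the data1/zones filesystem and mytest zone from this thread; the quota presumably keeps the reported filesystem size small enough that the 32-bit statvfs behind the "Value too large for defined data type" error no longer overflows:)
# zfs set quota=10g data1/zones
# zoneadm -z mytest install
# zfs set quota=none data1/zones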
Yes, running Solaris as Dom0.
I will be letting each domain owner choose their own OS for their DomU.
I will also be running some unmodified DomU (like JNODE), which is why I
bought a system with AMD-V.
Malachi
On 3/29/07, Chris Beal [EMAIL PROTECTED] wrote:
Robert Milkowski wrote:
Hello
I did `zpool create data raidz2 c3d0 c4d0 c5d0 c6d0 c7d0`
`zpool list` says data has 1.13T available
`zfs list` says data has 680G available
Malachi
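(For context, not part of the original exchange: `zpool list` reports raw capacity including parity, while `zfs list` reports usable space. In a 5-disk raidz2, two disks' worth goes to parity, so 1.13T × 3/5 ≈ 680G, which matches both numbers above.)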
Niclas Sodergard wrote:
Hi everyone,
Sorry for crossposting but it seems I have stumbled upon a problem
that affects both. I have a V490 running Solaris 10u3 with a 16x750GB
raid array connected to it. I've created an 8TB zfs filesystem called
data1 and created a zfs filesystem called
On 3/29/07, Jerry Jelinek [EMAIL PROTECTED] wrote:
# zoneadm -z mytest install
zoneadm: /zones/mytest: Value too large for defined data type
could not verify zonepath /zones/mytest because of the above errors.
zoneadm: zone mytest failed to verify
While this doesn't help with your
Since `zpool list` shows a SIZE=1.13T (which I assume is also how much space
exists in the pool), it seems AVAIL=1.13T should work more like `zfs list`
does and show how much is actually available.
Just my 2 cents, but whether it is working correctly or not, behaving
differently makes it appear
On Wed, 28 Mar 2007, prasad wrote:
We create iso images of our product in the
following way (high-level):
# mkfile 3g /isoimages/myiso
# lofiadm -a /isoimages/myiso
/dev/lofi/1
# newfs /dev/rlofi/1
# mount /dev/lofi/1 /mnt
# cd /mnt; zcat /product/myproduct.tar.Z | tar xf -
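(One ZFS-flavored alternative, sketched with a hypothetical pool name tank: back the staging image with a zvol instead of mkfile plus lofi:)
# zfs create -V 3g tank/isovol
# newfs /dev/zvol/rdsk/tank/isovol
# mount /dev/zvol/dsk/tank/isovol /mnt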
We will make the manual and netinstall instructions available to
non-SWAN folks shortly.
The manual instructions are available at
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
We are still working on the Netinstall/DVD binary/setup kit.
Will send out a notice
On 3/29/07, Bill Sommerfeld [EMAIL PROTECTED] wrote:
On Thu, 2007-03-29 at 17:07 +0300, Niclas Sodergard wrote:
On 3/29/07, Jerry Jelinek [EMAIL PROTECTED] wrote:
# zoneadm -z mytest install
zoneadm: /zones/mytest: Value too large for defined data type
could not verify zonepath
Hi,
Is it possible to take file level snapshots in ZFS? Suppose I want to
keep a version of the file before writing new data to it, how do I do
that? My goal would be to roll back the file to an earlier version (i.e.
discard the new changes) depending upon a policy. I would like to
keep only 1
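(A common approach, sketched with hypothetical names rather than taken from this thread: ZFS snapshots are per-filesystem, but a single file can be copied back out of one through the hidden .zfs directory:)
# zfs snapshot data1/fs@before-edit
...application overwrites /data1/fs/myfile...
# cp /data1/fs/.zfs/snapshot/before-edit/myfile /data1/fs/myfile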
On Thu, Mar 29, 2007 at 11:52:56PM +0530, Atul Vidwansa wrote:
Is it possible to take file level snapshots in ZFS? Suppose I want to
keep a version of the file before writing new data to it, how do I do
that? My goal would be to roll back the file to an earlier version (i.e.
discard the new
Atul Vidwansa wrote:
Hi,
Is it possible to take file level snapshots in ZFS? Suppose I want to
keep a version of the file before writing new data to it, how do I do
that? My goal would be to roll back the file to an earlier version (i.e.
discard the new changes) depending upon a policy. I would
Hi Richard,
I am not talking about source (ASCII) files. How about versioning
production data? I talked about file-level snapshots because
snapshotting the entire filesystem does not make sense when the
application is changing just a few files at a time.
Regards,
-atul
On 3/30/07, Richard Elling [EMAIL
Atul Vidwansa wrote:
Hi Richard,
I am not talking about source (ASCII) files. How about versioning
production data? I talked about file-level snapshots because
snapshotting the entire filesystem does not make sense when the
application is changing just a few files at a time.
CVS supports binary files.
Just to clarify,
If we wanted to do this with ZFS as the entire disk (where we currently have
UFS), would we copy everything over (the cpio bit) to an uninvolved
filesystem, then create the zpool on the disks that used to have UFS (and the
one we are mirroring to) -- thus destroying the original UFS
On 29/03/07, Atul Vidwansa [EMAIL PROTECTED] wrote:
Hi Richard,
I am not talking about source (ASCII) files. How about versioning
production data? I talked about file-level snapshots because
snapshotting the entire filesystem does not make sense when the
application is changing just a few files at a
On 3/30/07, Shawn Walker [EMAIL PROTECTED] wrote:
On 29/03/07, Atul Vidwansa [EMAIL PROTECTED] wrote:
Hi Richard,
I am not talking about source (ASCII) files. How about versioning
production data? I talked about file-level snapshots because
snapshotting the entire filesystem does not make
Robert Milkowski wrote:
2. MPxIO - it tries to fail the disk over to the second SP, but it looks
like it tries forever (or for a very, very long time). After some time
it should have generated a disk I/O failure...
Are there any other hosts connected to this storage array? It looks like
there might be an
On Thu, 29 Mar 2007, Shawn Walker wrote:
On 29/03/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
On 3/30/07, Shawn Walker [EMAIL PROTECTED] wrote:
On 29/03/07, Atul Vidwansa [EMAIL PROTECTED] wrote:
Hi Richard,
I am not talking about source (ASCII) files. How about versioning
On 3/30/07, Shawn Walker [EMAIL PROTECTED] wrote:
Actually, recent version control systems can be very efficient at
storing binary files.
Still nowhere near as efficient as a ZFS snapshot.
Careful consideration of the layout of your file
system applies regardless of which type of file system it
On 3/30/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
Careful consideration of the layout of your file
system applies regardless of which type of file system it is (zfs,
ufs, etc.).
True. ZFS does open up a whole new can of worms/flexibility.
How do hard-links work across zfs
On 3/30/07, Nicholas Lee [EMAIL PROTECTED] wrote:
How do hard-links work across zfs mount/filesystems in the same pool?
No.
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/vnode.c#1322
My guess is that it should be technically possible within the same pool, but
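(A quick illustration with hypothetical dataset names: each ZFS filesystem is its own vnode namespace, so link(2) across two of them fails with EXDEV even inside one pool:)
# zfs create tank/fs1
# zfs create tank/fs2
# touch /tank/fs1/file
# ln /tank/fs1/file /tank/fs2/file
(fails with a cross-device link error)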
On 29/03/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
On 3/30/07, Shawn Walker [EMAIL PROTECTED] wrote:
Actually, recent version control systems can be very efficient at
storing binary files.
Still nowhere near as efficient as a ZFS snapshot.
Maybe, but they're far better at doing versioning and
Let's say I reorganized my zpools. Now there are 2 pools:
Pool1:
Production data, a combination of binary and text files. Only a few files
change at a time. Average file sizes are around 1MB. Does it make
sense to take zfs snapshots of the pool? Will the snapshot consume as
much space as the original
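(A sketch of how to check this empirically, dataset names hypothetical: snapshots are copy-on-write, so a snapshot's USED starts near zero and grows only as the live data diverges from it:)
# zfs snapshot -r pool1@before
# zfs list -t snapshot
The USED column shows only the space held exclusively by each snapshot, not a full copy of the data.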
On 3/30/07, Shawn Walker [EMAIL PROTECTED] wrote:
Maybe, but they're far better at doing versioning and providing a
history of changes.
I'd have to agree. I track 6000 blobs (OOo gzip files, pdfs, and other stuff)
in svn; even with 1300 changesets over 3 years there is a marginal disk cost
on
On March 29, 2007 1:18:31 PM +0300 Niclas Sodergard [EMAIL PROTECTED]
wrote:
Anyway, I then create a new sparse zone with the root in
/zones/mytest. It looks like this (just a barebone setup)
I thought zone root on zfs was not supported.
-frank
On 3/30/07, Atul Vidwansa [EMAIL PROTECTED] wrote:
Let's say I reorganized my zpools. Now there are 2 pools:
Pool1:
Production data, a combination of binary and text files. Only a few files
change at a time. Average file sizes are around 1MB. Does it make
sense to take zfs snapshots of the pool?
However, even with sequential writes, a large I/O size makes a huge difference
in throughput. Ask the QFS folks about data capture applications. ;-)
(This is less true of ATA disks, which tend to have less buffering and much
less sophisticated architectures. I'm not aware of any dual-processor
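(Illustrative arithmetic, not from the original post: assume roughly 0.5 ms of fixed per-I/O overhead on top of a 60 MB/s media rate. An 8 KB transfer then takes about 0.63 ms, or ~12 MB/s; a 1 MB transfer takes about 17 ms, or ~58 MB/s, amortizing the overhead almost completely.)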