However, the zfs-discuss list seems to be archived at gmane.
On 2013-03-22 22:57, Cindy Swearingen wrote:
I hope to see everyone on the other side...
***
The ZFS discussion list is moving to java.net.
This opensolaris/zfs discussion will not be available
Hi,
I have Dell md1200 connected to two heads ( Dell R710 ). The heads have
Perc H800 card and drives are configured in Raid0 ( Virtual Disk) in the
RAID controller.
One of the drives had crashed and is replaced by a spare. Resilvering was
triggered but fails to complete due to drives going
Hello all,
I have a kind of lame question here: how can I force the system (OI)
to probe all the HDD controllers and disks that it can find, and be
certain that it has searched everywhere for disks?
My remotely supported home-NAS PC was unavailable for a while, and
a friend rebooted it for
I hope to see everyone on the other side...
***
The ZFS discussion list is moving to java.net.
This opensolaris/zfs discussion will not be available after March 24.
There is no way to migrate the existing list to the new list.
The solaris-zfs project is
mail-archive.com is an independent third party.
This is one of their FAQs:
http://www.mail-archive.com/faq.html#duration
The Mail Archive has been running since 1998. Archiving services are planned to
continue indefinitely. We do not plan on ever needing to remove archived
material. Do not,
Hi,
Can I know how to configure an SSD to be used for L2ARC? Basically I want
to improve read performance.
To increase write performance, will an SSD for ZIL help? As I read on forums,
ZIL is only used for MySQL/transaction-based writes. I have regular writes
only.
Thanks.
Regards,
Ram
Can I know how to configure an SSD to be used for L2ARC? Basically I want to
improve read performance.
Read the documentation, specifically the section titled:
Creating a ZFS Storage Pool With Cache Devices
To increase write performance, will an SSD for ZIL help? As I read on forums,
ZIL
On 2013-03-21 16:24, Ram Chander wrote:
Hi,
Can I know how to configure an SSD to be used for L2ARC? Basically I
want to improve read performance.
The zpool man page is quite informative on theory and concepts ;)
If your pool already exists, you can prepare the SSD (partition/slice
it) and:
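The reply above is cut off; a minimal sketch of the commands it presumably continues with, assuming a pool named tank and SSDs appearing as c5t1d0/c5t2d0 (hypothetical device names):

```shell
# Add a whole SSD as an L2ARC (read cache) device to an existing pool.
zpool add tank cache c5t1d0

# A separate log device (ZIL/slog) mainly helps synchronous writes,
# not ordinary asynchronous writes.
zpool add tank log c5t2d0

# Verify the new cache and log vdevs appear.
zpool status tank
```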
I have two identical Supermicro boxes with 32GB ram. Hardware details at
the end of the message.
They were running OI 151.a.5 for months. The zpool configuration was one
storage zpool with 3 vdevs of 8 disks in RAIDZ2.
The OI installation is absolutely clean. Just next-next-next until done.
All
Peter,
sorry if this is so obvious that you didn't mention it: Have you checked
/var/adm/messages and other diagnostic tool output?
regards
Michael
On Wed, Mar 20, 2013 at 4:34 PM, Peter Wood peterwood...@gmail.com wrote:
I have two identical Supermicro boxes with 32GB ram. Hardware details
Does the Supermicro IPMI show anything when it crashes? Does anything show
up in event logs in the BIOS, or in system logs under OI?
On Wed, Mar 20, 2013 at 11:34 AM, Peter Wood peterwood...@gmail.com wrote:
I have two identical Supermicro boxes with 32GB ram. Hardware details at
the end of
I'm sorry. I should have mentioned that I can't find any errors in the
logs. The last entry in /var/adm/messages is that I removed the keyboard
after the last reboot and then it shows the new boot up messages when I
boot up the system after the crash. The BIOS log is empty. I'm not sure how
to
How about crash dumps?
michael
On Wed, Mar 20, 2013 at 4:50 PM, Peter Wood peterwood...@gmail.com wrote:
I'm sorry. I should have mentioned that I can't find any errors in the
logs. The last entry in /var/adm/messages is that I removed the keyboard
after the last reboot and then it shows
I'm going to need some help with the crash dumps. I'm not very familiar
with Solaris.
Do I have to enable something to get the crash dumps? Where should I look
for them?
Thanks for the help.
On Wed, Mar 20, 2013 at 8:53 AM, Michael Schuster michaelspriv...@gmail.com
wrote:
How about crash
On 2013-03-20 17:15, Peter Wood wrote:
I'm going to need some help with the crash dumps. I'm not very familiar
with Solaris.
Do I have to enable something to get the crash dumps? Where should I
look for them?
Typically the kernel crash dumps are created as a result of kernel
panic; also they
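For the question above about enabling and finding crash dumps, a sketch of the usual Solaris/OpenIndiana commands (the dump zvol path may differ on your system):

```shell
# Show the current crash dump configuration (dump device, savecore dir).
dumpadm

# Enable saving of crash dumps on reboot, using the rpool dump zvol.
dumpadm -y -d /dev/zvol/dsk/rpool/dump

# After a panic and reboot, savecore writes the dump files under the
# configured directory, typically /var/crash/<hostname>.
ls /var/crash/$(hostname)
```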
Hi Jim,
Thanks for the pointers. I'll definitely look into this.
--
Peter Blajev
IT Manager, TAAZ Inc.
Office: 858-597-0512 x125
On Wed, Mar 20, 2013 at 11:29 AM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-03-20 17:15, Peter Wood wrote:
I'm going to need some help with the crash dumps.
No problem Trey. Anything will help.
Yes, I did a clean install overwriting the old OS.
Just to make sure, you actually did an overwrite reinstall with OI151a7
rather than upgrading the existing OS images? If you did a pkg
image-update, you should be able to boot back into the oi151a5
On Wed, Mar 20, 2013 at 08:50:40AM -0700, Peter Wood wrote:
I'm sorry. I should have mentioned that I can't find any errors in the
logs. The last entry in /var/adm/messages is that I removed the keyboard
after the last reboot and then it shows the new boot up messages when I
boot
Great write-up, Jens.
The chance of two motherboards being broken is probably low, but overheating is
a very good point. It was on my to-do list to set up IPMI, and it seems that now
is the best time to do it.
Thanks
On Wed, Mar 20, 2013 at 1:08 PM, Jens Elkner jel+...@cs.uni-magdeburg.dewrote:
On Wed, Mar
I can't seem to find any factual indication that opensolaris.org mailing lists
are going away, and I can't even find the reference to whoever said it was EOL
in a few weeks ... a few weeks ago.
So ... are these mailing lists going bye-bye?
Hi Ned,
This list is migrating to java.net and will not be available
in its current form after March 24, 2013.
The archive of this list is available here:
http://www.mail-archive.com/zfs-discuss@opensolaris.org/
I will provide an invitation to the new list shortly.
Thanks for your patience.
Will the archives of all the lists be preserved? I don't think we've seen a
clear answer on that (it's possible you haven't, either!).
On Wed, Mar 20, 2013 at 2:14 PM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
Hi Ned,
This list is migrating to java.net and will not be available
in
I can reproduce the problem. I can crash the system.
Here are the steps I did (some steps may not be needed but I haven't tested
it):
- Clean install of OI 151.a.7 on Supermicro hardware described above (32GB
RAM though, not the 128GB)
- Create 1 zpool, 6 raidz vdevs with 5 drives each
- NFS
Hi Everyone,
The ZFS discussion list is moving to java.net.
This opensolaris/zfs discussion will not be available after March 24.
There is no way to migrate the existing list to the new list.
The solaris-zfs project is here:
http://java.net/projects/solaris-zfs
See the steps below to join
Andrew Werchowiecki wrote:
Thanks for the info about slices, I may give that a go later on. I’m
not keen on that because I have clear evidence (as in zpools set up
this way, right now, working, without issue) that GPT partitions of
the style shown above work and I want to see why it doesn’t
Hi Andrew,
Your original syntax was incorrect.
A p* device is a larger container for the d* device or s* devices.
In the case of a cache device, you need to specify a d* or s* device.
That you can add p* devices to a pool is a bug.
Adding different slices from c25t10d1 as both log and cache
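A sketch of what Cindy describes, using s* (slice) devices rather than p* (fdisk partition) devices when splitting one SSD between log and cache; the slice numbers here are hypothetical:

```shell
# Assuming the SSD c25t10d1 has been given two slices with format(1M):
zpool add aggr0 log c25t10d1s0     # slice 0 as the log (slog) device
zpool add aggr0 cache c25t10d1s1   # slice 1 as the L2ARC cache device
zpool status aggr0
```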
Hi Hans,
Start with the ZFS Admin Guide, here:
http://docs.oracle.com/cd/E26502_01/html/E29007/index.html
Or, start with your specific questions.
Thanks, Cindy
On 03/19/13 03:30, Hans J. Albertsson wrote:
as used on Illumos?
I've seen a few tutorials written by people who obviously are
There are links to videos and other materials here:
http://wiki.smartos.org/display/DOC/ZFS
Not as organized as I'd like...
On Tue, Mar 19, 2013 at 2:30 AM, Hans J. Albertsson
hans.j.alberts...@branneriet.se wrote:
as used on Illumos?
I've seen a few tutorials written by people who
Andrew Werchowiecki wrote:
Total disk size is 9345 cylinders
Cylinder size is 12544 (512 byte) blocks
                                  Cylinders
  Partition   Status   Type       Start   End   Length   %
  =========   ======   ====       =====   ===   ======   ===
On 2013-03-19 20:38, Cindy Swearingen wrote:
Hi Andrew,
Your original syntax was incorrect.
A p* device is a larger container for the d* device or s* devices.
In the case of a cache device, you need to specify a d* or s* device.
That you can add p* devices to a pool is a bug.
I disagree; at
On 03/19/13 20:27, Jim Klimov wrote:
I disagree; at least, I've always thought differently:
the d device is the whole disk denomination, with a
unique number for a particular controller link (c+t).
The disk has some partitioning table, MBR or GPT/EFI.
In these tables, partition p0 stands for
On 2013-03-19 22:07, Andrew Gabriel wrote:
The GPT partitioning spec requires the disk to be FDISK
partitioned with just one single FDISK partition of type EFI,
so that tools which predate GPT partitioning will still see
such a GPT disk as fully assigned to FDISK partitions, and
therefore less
You could always use 40-gigabit between the two storage systems which would
speed things dramatically, or back to back 56-gigabit IB.
From: zfs-discuss-requ...@opensolaris.org
Sent: Monday, March 18, 2013 11:01 PM
To: zfs-discuss@opensolaris.org
Subject:
I did something like the following:
format -e /dev/rdsk/c5t0d0p0
fdisk
1 (create)
F (EFI)
6 (exit)
partition
label
1
y
0
usr
wm
64
4194367e
1
usr
wm
4194368
117214990
label
1
y
Total disk size is 9345 cylinders
Cylinder size is 12544 (512 byte) blocks
On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki
andrew.werchowie...@xpanse.com.au wrote:
I understand that p0 refers to the whole disk... in the logs I pasted in
I'm not attempting to mount p0. I'm trying to work out why I'm getting an
error attempting to mount p2, after p1 has
On 03/16/2013 12:57 AM, Richard Elling wrote:
On Mar 15, 2013, at 6:09 PM, Marion Hakanson hakan...@ohsu.edu wrote:
So, has anyone done this? Or come close to it? Thoughts, even if you
haven't done it yourself?
Don't forget about backups :-)
-- richard
Transferring 1 PB over a 10
I know it's heresy these days, but given the I/O throughput you're looking for
and the amount you're going to spend on disks, a T5-2 could make sense when
they're released (I think) later this month.
Crucial sells RAM they guarantee for use in SPARC T-series, and since you're at
an edu the
hakan...@ohsu.edu said:
I get a little nervous at the thought of hooking all that up to a single
server, and am a little vague on how much RAM would be advisable, other than
as much as will fit (:-). Then again, I've been waiting for something
like
pNFS/NFSv4.1 to be usable for gluing
On Sat, 16 Mar 2013, Kristoffer Sheather @ CloudCentral wrote:
Well, off the top of my head:
2 x Storage Heads, 4 x 10G, 256G RAM, 2 x Intel E5 CPU's
8 x 60-Bay JBOD's with 60 x 4TB SAS drives
RAIDZ2 stripe over the 8 x JBOD's
That should fit within 1 rack comfortably and provide 1 PB
On 2013-03-16 15:20, Bob Friesenhahn wrote:
On Sat, 16 Mar 2013, Kristoffer Sheather @ CloudCentral wrote:
Well, off the top of my head:
2 x Storage Heads, 4 x 10G, 256G RAM, 2 x Intel E5 CPU's
8 x 60-Bay JBOD's with 60 x 4TB SAS drives
RAIDZ2 stripe over the 8 x JBOD's
That should fit
On 2013-03-16 15:20, Bob Friesenhahn wrote:
On Sat, 16 Mar 2013, Kristoffer Sheather @ CloudCentral wrote:
Well, off the top of my head:
2 x Storage Heads, 4 x 10G, 256G RAM, 2 x Intel E5 CPU's
8 x 60-Bay JBOD's with 60 x 4TB SAS drives
RAIDZ2 stripe over the 8 x JBOD's
That should fit
On Sat, Mar 16, 2013 at 2:27 PM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-03-16 15:20, Bob Friesenhahn wrote:
On Sat, 16 Mar 2013, Kristoffer Sheather @ CloudCentral wrote:
Well, off the top of my head:
2 x Storage Heads, 4 x 10G, 256G RAM, 2 x Intel E5 CPU's
8 x 60-Bay JBOD's with 60
I just recently built an OpenIndiana 151a7 system that is currently 1/2 PB
that will be expanded to 1 PB as we collect imaging data for the Human
Connectome Project at Washington University in St. Louis. It is very much
like your use case as this is an offsite backup system that will write once
It's a home set up, the performance penalty from splitting the cache devices is
non-existent, and that workaround sounds like some pretty crazy amount of
overhead where I could instead just have a mirrored slog.
I'm less concerned about wasted space, more concerned about the number of SAS
ports I
On Mar 16, 2013, at 7:01 PM, Andrew Werchowiecki
andrew.werchowie...@xpanse.com.au wrote:
It's a home set up, the performance penalty from splitting the cache devices
is non-existent, and that workaround sounds like some pretty crazy amount of
overhead where I could instead just have a
Andrew Werchowiecki wrote:
Hi all,
I'm having some trouble with adding cache drives to a zpool, anyone
got any ideas?
muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
Password:
cannot open '/dev/dsk/c25t10d1p2': I/O error
muslimwookie@Pyzee:~$
I have two SSDs in the system,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andrew Werchowiecki
muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
Password:
cannot open '/dev/dsk/c25t10d1p2': I/O error
muslimwookie@Pyzee:~$
I have two SSDs in the
Thanks for the info. I am planning the install this weekend, between
Formula One and other hardware upgrades... fingers crossed it works!
On 14 Mar 2013 09:19, Heiko L. h.lehm...@hs-lausitz.de wrote:
support for VT, but nothing for AMD... The Opterons don't have VT, so I
won't be using XEN,
Greetings,
Has anyone out there built a 1-petabyte pool? I've been asked to look
into this, and was told low performance is fine, workload is likely
to be write-once, read-occasionally, archive storage of gene sequencing
data. Probably a single 10Gbit NIC for connectivity is sufficient.
We've
On Fri, Mar 15, 2013 at 06:09:34PM -0700, Marion Hakanson wrote:
Greetings,
Has anyone out there built a 1-petabyte pool? I've been asked to look
into this, and was told low performance is fine, workload is likely
to be write-once, read-occasionally, archive storage of gene sequencing
Well, off the top of my head:
2 x Storage Heads, 4 x 10G, 256G RAM, 2 x Intel E5 CPU's
8 x 60-Bay JBOD's with 60 x 4TB SAS drives
RAIDZ2 stripe over the 8 x JBOD's
That should fit within 1 rack comfortably and provide 1 PB storage..
Regards,
Kristoffer Sheather
Cloud Central
Scale Your Data
Actually, you could use 3TB drives and with a 6/8 RAIDZ2 stripe achieve
1080 TB usable.
You'll also need 8-16 x SAS ports available on each storage head to provide
redundant multi-pathed SAS connectivity to the JBOD's, recommend LSI
9207-8E's for those and Intel X520-DA2's for the 10G NIC's.
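The usable-capacity arithmetic behind the 1080 TB figure can be checked directly (a sketch of raw capacity only, ignoring ZFS metadata overhead):

```python
# 8 JBODs x 60 bays, grouped into 8-disk raidz2 vdevs (6 data + 2 parity),
# populated with 3TB drives as suggested above.
jbods, bays = 8, 60
vdev_width, data_disks = 8, 6
drive_tb = 3

total_drives = jbods * bays            # 480 drives
vdevs = total_drives // vdev_width     # 60 raidz2 vdevs
usable_tb = vdevs * data_disks * drive_tb
print(usable_tb)                       # 1080
```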
On Fri, Mar 15, 2013 at 7:09 PM, Marion Hakanson hakan...@ohsu.edu wrote:
Has anyone out there built a 1-petabyte pool?
I'm not advising against your building/configuring a system yourself,
but I suggest taking look at the Petarack:
http://www.aberdeeninc.com/abcatg/petarack.htm
It shows it's
rvandol...@esri.com said:
We've come close:
admin@mes-str-imgnx-p1:~$ zpool list
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
datapool   978T   298T   680T  30%  1.00x  ONLINE  -
syspool    278G   104G   174G  37%  1.00x  ONLINE  -
Using a Dell R720 head unit, plus a
On Fri, Mar 15, 2013 at 06:31:11PM -0700, Marion Hakanson wrote:
rvandol...@esri.com said:
We've come close:
admin@mes-str-imgnx-p1:~$ zpool list
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
datapool   978T   298T   680T  30%  1.00x  ONLINE  -
syspool    278G   104G
Ray said:
Using a Dell R720 head unit, plus a bunch of Dell MD1200 JBODs dual pathed
to a couple of LSI SAS switches.
Marion said:
How many HBA's in the R720?
Ray said:
We have qty 2 LSI SAS 9201-16e HBA's (Dell resold[1]).
Sounds similar in approach to the Aberdeen product another sender
On Mar 15, 2013, at 6:09 PM, Marion Hakanson hakan...@ohsu.edu wrote:
Greetings,
Has anyone out there built a 1-petabyte pool?
Yes, I've done quite a few.
I've been asked to look
into this, and was told low performance is fine, workload is likely
to be write-once, read-occasionally,
support for VT, but nothing for AMD... The Opterons don't have VT, so I won't
be using XEN, but the Zones may be useful...
We use XEN/PV on X4200 for many years without problems.
dom0: X4200+openindiana+xvm
guests(PV): openindiana,linux/fedora,linux/debian
On 2013-03-11 21:50, Bob Friesenhahn wrote:
On Mon, 11 Mar 2013, Tiernan OToole wrote:
I know this might be the wrong place to ask, but hopefully someone can
point me in the right direction...
I got my hands on a Sun x4200. It's the original one, not the M2, and
has 2 single-core Opterons, 4Gb
On Mar 14, 2013, at 5:55 PM, Jim Klimov jimkli...@cos.ru wrote:
However, recently the VM virtual hardware clocks became way slow.
Does NTP help correct the guest's clock?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On 2013-03-15 01:58, Gary Driggs wrote:
On Mar 14, 2013, at 5:55 PM, Jim Klimov jimkli...@cos.ru wrote:
However, recently the VM virtual hardware clocks became way slow.
Does NTP help correct the guest's clock?
Unfortunately no, neither guest NTP, ntpdate or rdate in crontabs,
nor
Hi all,
I'm having some trouble with adding cache drives to a zpool, anyone got any
ideas?
muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
Password:
cannot open '/dev/dsk/c25t10d1p2': I/O error
muslimwookie@Pyzee:~$
I have two SSDs in the system, I've created an 8gb partition on
I know this might be the wrong place to ask, but hopefully someone can
point me in the right direction...
I got my hands on a Sun x4200. It's the original one, not the M2, and has 2
single-core Opterons, 4Gb RAM and 4 73Gb SAS Disks... But, I don't know what
to install on it... I was thinking of
On Mon, 11 Mar 2013, Tiernan OToole wrote:
I know this might be the wrong place to ask, but hopefully someone can point me
in the right direction...
I got my hands on a Sun x4200. It's the original one, not the M2, and has 2
single-core Opterons, 4Gb RAM and 4 73Gb SAS Disks...
But, I don't
To tell you the truth, I don't really need the virtualization stuff... Zones
sound interesting, since they seem to be lighter weight than Xen or anything
like that...
On Mon, Mar 11, 2013 at 8:50 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 11 Mar 2013, Tiernan OToole wrote:
On Tue, 5 Mar 2013, Ed Shipe wrote:
On 2 different OpenIndiana 151a7 systems, I'm showing a huge number of Illegal
Requests. There are no other apparent
issues, performance is fine, etc., etc. Everything works great - what are these
illegal requests? My Google-fu is
failing me...
My system
We do the same for all of our legacy operating system backups.
Take
a snapshot then do an rsync and an excellent way of maintaining
incremental backups for those.
Magic rsync options used:
-a --inplace --no-whole-file --delete-excluded
This causes rsync to overwrite the file
On Mon, 4 Mar 2013, Matthew Ahrens wrote:
Magic rsync options used:
-a --inplace --no-whole-file --delete-excluded
This causes rsync to overwrite the file blocks in place rather than writing
to a new temporary file first. As a result, zfs COW produces primitive
deduplication of at least
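A sketch of the snapshot-plus-rsync cycle described above; host, path, and dataset names are hypothetical:

```shell
# Overwrite changed blocks in place, so ZFS copy-on-write keeps all
# unchanged blocks shared with earlier snapshots.
rsync -a --inplace --no-whole-file --delete-excluded \
    legacyhost:/export/ /tank/backup/legacyhost/

# Snapshot the result; each day's snapshot consumes space only for the
# blocks that actually changed.
zfs snapshot tank/backup/legacyhost@$(date +%Y-%m-%d)
```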
On Tue, March 5, 2013 10:02, Bob Friesenhahn wrote:
Rsync does need to read files on the destination filesystem to see if
they have changed. If the system has sufficient RAM (and/or L2ARC)
then files may still be cached from the previous day's run. In most
cases only a small subset of the
On 3/5/2013 9:40 AM, David Magda wrote:
On Tue, March 5, 2013 10:02, Bob Friesenhahn wrote:
Rsync does need to read files on the destination filesystem to see if
they have changed. If the system has sufficient RAM (and/or L2ARC)
then files may still be cached from the previous day's run. In
On Tue, 5 Mar 2013, David Magda wrote:
It's also possible to reduce the amount that rsync has to walk the entire
file tree.
Most folks simply do a rsync --options /my/source/ /the/dest/, but if
you use zfs diff, and parse/feed the output of that to rsync, then the
amount of thrashing can
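A hedged sketch of the zfs diff idea above; the snapshot and path names are hypothetical, and the parsing assumes the tab-separated output of zfs diff -H:

```shell
# List paths modified (M) or created (+) between two snapshots, strip
# the mountpoint prefix, and feed them to rsync so it only visits
# changed files instead of walking the whole tree.
zfs diff -H tank/data@yesterday tank/data@today \
  | awk -F'\t' '$1 == "M" || $1 == "+" {print $NF}' \
  | sed 's|^/tank/data/||' > /tmp/changed.list

rsync -a --files-from=/tmp/changed.list /tank/data/ /backup/data/
```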
On 3/5/2013 10:27 AM, Bob Friesenhahn wrote:
On Tue, 5 Mar 2013, David Magda wrote:
It's also possible to reduce the amount that rsync has to walk the
entire
file tree.
Most folks simply do a rsync --options /my/source/ /the/dest/, but if
you use zfs diff, and parse/feed the output of that to
On Tue, March 5, 2013 11:17, Russ Poyner wrote:
Your idea to use zfs diff to limit the need to stat the entire
filesystem tree intrigues me. My current rsync backups are normally
limited by this very factor. It takes longer to walk the filesystem tree
than it does to transfer the new data.
On Tue, Feb 26, 2013 at 7:42 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Wed, 27 Feb 2013, Ian Collins wrote:
I am finding that rsync with the right options (to directly
block-overwrite) plus zfs snapshots is providing me with pretty
amazing deduplication for backups without
On 2 different OpenIndiana 151a7 systems, I'm showing a huge number of
Illegal Requests. There are no other apparent issues, performance is fine,
etc., etc.
Everything works great - what are these illegal requests? My Google-fu is
failing me...
Thanks,
-ed
root@NAPP1:~# iostat -Ensr
c6t0d0
On 02/26/13 20:30, Morris Hooten wrote:
Besides copying data from /dev/md/dsk/x volume manager filesystems to
new zfs filesystems
does anyone know of any zfs conversion tools to make the
conversion/migration from svm to zfs
easier?
With Solaris 11 you can use shadow migration, it is really a
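The shadow migration mentioned above is driven by the shadow property on a new ZFS dataset; a sketch, assuming the old SVM filesystem is mounted read-only at /export/old (a hypothetical path):

```shell
# Create the new dataset with its shadow property pointing at the old
# filesystem; data migrates in the background while the new dataset is
# immediately usable.
zfs create -o shadow=file:///export/old rpool/export/new
```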
How is the quality of the ZFS Linux port today? Is it comparable to Illumos,
or at least FreeBSD? Can I trust production data to it?
On Wed, Feb 27, 2013 at 5:22 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 26 Feb 2013, Gary Driggs wrote:
On Feb 26, 2013, at 12:44 AM,
On 02/27/2013 12:32 PM, Ahmed Kamal wrote:
How is the quality of the ZFS Linux port today? Is it comparable to Illumos
or at least FreeBSD? Can I trust production data to it?
Can't speak from personal experience, but a colleague of mine has been
PPA builds on Ubuntu and has had, well, less
I've been using it since rc13. It's been stable for me as long as you don't
get into things like zvols and such...
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
Sent: Wednesday, February 27, 2013
On Wed, 27 Feb 2013, Ian Collins wrote:
Magic rsync options used:
-a --inplace --no-whole-file --delete-excluded
This causes rsync to overwrite the file blocks in place rather than
writing to a new temporary file first. As a result, zfs COW produces
primitive deduplication of at least the
On Wed, Feb 27, 2013 at 2:57 AM, Dan Swartzendruber dswa...@druber.comwrote:
I've been using it since rc13. It's been stable for me as long as you
don't
get into things like zvols and such...
Then it definitely isn't at the level of FreeBSD, and personally I would
not consider that
On 2/27/2013 2:05 PM, Tim Cook wrote:
On Wed, Feb 27, 2013 at 2:57 AM, Dan Swartzendruber
dswa...@druber.com mailto:dswa...@druber.com wrote:
I've been using it since rc13. It's been stable for me as long as
you don't
get into things like zvols and such...
Then it
Hi Darren, you're right! With Solaris 11 and the shadow migration feature it's
fantastic.
Not sure which Solaris version we are talking about here.
Alfredo
On Wed, Feb 27, 2013 at 10:22 PM, Darren J Moffat
darr...@opensolaris.orgwrote:
On 02/26/13 20:30, Morris Hooten wrote:
Besides copying
Thanks all! I will check out FreeNAS and see what it can do... I will also
check my RAID Card and see if it can work with JBOD... fingers crossed...
The machine has a couple of internal SATA ports (think there are 2, could be
4) so I was thinking of using those for boot disks and SSDs later...
As a
On 02/26/2013 09:33 AM, Tiernan OToole wrote:
As a follow up question: Data Deduplication: The machine, to start, will
have about 5Gb RAM. I read somewhere that 20TB storage would require about
8GB RAM, depending on block size...
The typical wisdom is that 1TB of dedup'ed data = 1GB of RAM.
On Mon, Feb 25, 2013 at 10:33 PM, Tiernan OToole lsmart...@gmail.comwrote:
Thanks all! I will check out FreeNAS and see what it can do... I will also
check my RAID Card and see if it can work with JBOD... fingers crossed...
The machine has a couple internal SATA ports (think there are 2, could
Thanks again lads. I will take all that info into advice, and will join
that new group also!
Thanks again!
--Tiernan
On Tue, Feb 26, 2013 at 8:44 AM, Tim Cook t...@cook.ms wrote:
On Mon, Feb 25, 2013 at 10:33 PM, Tiernan OToole lsmart...@gmail.comwrote:
Thanks all! I will check out
Solaris 11.1 (free for non-prod use).
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tiernan OToole
Sent: 25 February 2013 14:58
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS Distro Advice
Good morning all.
My home
For what it's worth,
I had the same problem and found the answer here:
http://forums.freebsd.org/showthread.php?t=27207
On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
this list is going to get shut down by Oracle next month.
Whose description still reads, "everything ZFS running on illumos-based
distributions."
-Gary
On 02/26/2013 03:51 PM, Gary Driggs wrote:
On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
this list is going to get shut down by Oracle next month.
Whose description still reads, everything ZFS running on
On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
I can't seem to find this list. Do you have a URL for that?
Mailman, hopefully?
this list is going
On 02/26/2013 05:57 PM, Eugen Leitl wrote:
On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
I can't seem to find this list. Do you have an URL for
On Tue, Feb 26, 2013 at 06:01:39PM +0100, Sašo Kiselkov wrote:
On 02/26/2013 05:57 PM, Eugen Leitl wrote:
On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
I'd also recommend that you go and subscribe to z...@lists.illumos.org,
Be careful when testing ZFS with iozone. I ran a bunch of stats many
years ago that produced results that did not pass a basic sanity check. There
was *something* about the iozone test data that ZFS either did not like or liked
very much, depending on the specific test.
I
On Feb 26, 2013, at 12:33 AM, Tiernan OToole lsmart...@gmail.com wrote:
Thanks all! I will check out FreeNAS and see what it can do... I will also
check my RAID Card and see if it can work with JBOD... fingers crossed... The
machine has a couple internal SATA ports (think there are 2, could
Besides copying data from /dev/md/dsk/x volume manager filesystems to new
zfs filesystems
does anyone know of any zfs conversion tools to make the
conversion/migration from svm to zfs
easier?
Thanks
Morris Hooten
Unix SME
Integrated Technology Delivery
mhoo...@us.ibm.com
Office:
Robert Milkowski wrote:
Solaris 11.1 (free for non-prod use).
But a ticking bomb if you use a cache device.
--
Ian.
Robert Milkowski wrote:
Solaris 11.1 (free for non-prod use).
But a ticking bomb if you use a cache device.
It's been fixed in an SRU (although this is only for customers with a support
contract - still, it will be in 11.2 as well).
Then, I'm sure there are other bugs which are fixed in
Robert Milkowski wrote:
Robert Milkowski wrote:
Solaris 11.1 (free for non-prod use).
But a ticking bomb if you use a cache device.
It's been fixed in an SRU (although this is only for customers with a support
contract - still, it will be in 11.2 as well).
Then, I'm sure there are other bugs