Re: [zfs-discuss] ZFS QoS and priorities

2012-12-01 Thread Nikola M.

On 11/29/12 10:56 AM, Jim Klimov wrote:

For example, I might want to have corporate webshop-related
databases and appservers to be the fastest storage citizens,
then some corporate CRM and email, then various lower priority
zones and VMs, and at the bottom of the list - backups.

AFAIK, now such requests would hit the ARC, then the disks if
needed - in no particular order. Well, can the order be made
particular with current ZFS architecture, i.e. by setting
some datasets to have a certain NICEness or another priority
mechanism?
Something like that is implemented in Joyent's illumos-based 
distribution, SmartOS.
(illumos is the open-source continuation of the OpenSolaris OS/Net 
consolidation, just as Solaris 11 is the closed one.)

Following them, it is also available in OpenIndiana/illumos, and possibly others.
A list of illumos-based distributions: 
http://wiki.illumos.org/display/illumos/Distributions


It uses Solaris Zones and throttles disk usage at the zone level, so you
separate workloads into separate zones.
You can even put KVM machines inside zones (Joyent and OI ship the 
Joyent-written KVM/Intel implementation in illumos) for the same 
per-zone I/O throttling.


Joyent say their solution takes relatively little code but gives very 
good results (they run a massive cloud computing service with many 
zones and KVM VMs, so they should know).

http://wiki.smartos.org/display/DOC/Tuning+the+IO+Throttle
http://dtrace.org/blogs/wdp/2011/03/our-zfs-io-throttle/
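
For illustration only (the exact knob name here is my recollection of 
the SmartOS docs above, so treat it as an assumption and check the wiki 
page): each zone is given a relative I/O priority, and the throttle 
delays I/O from zones that exceed their fair share. Something along 
these lines:

  # assumed vmadm syntax; <uuid> is the zone's UUID from 'vmadm list'
  vmadm update <uuid-of-webshop-db-zone> zfs_io_priority=100
  vmadm update <uuid-of-crm-zone> zfs_io_priority=50
  vmadm update <uuid-of-backup-zone> zfs_io_priority=10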

I don't know whether it is available or applicable to the (now closed) 
OS/Net of Solaris 11 and Solaris 10, because Joyent/illumos have access 
to the complete stack and actively change it to suit their needs, which 
is a good example of the benefits of an open-source/free-software 
stack. But maybe it is.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS QoS and priorities

2012-12-01 Thread Nikola M.

On 12/ 2/12 03:24 AM, Nikola M. wrote:

It uses Solaris Zones and throttles disk usage at the zone level, so you
separate workloads into separate zones.
You can even put KVM machines inside zones (Joyent and OI ship the 
Joyent-written KVM/Intel implementation in illumos) for the same 
per-zone I/O throttling.


Joyent say their solution takes relatively little code but gives very 
good results (they run a massive cloud computing service with many 
zones and KVM VMs, so they should know).

http://wiki.smartos.org/display/DOC/Tuning+the+IO+Throttle
http://dtrace.org/blogs/wdp/2011/03/our-zfs-io-throttle/

There is a short video, from the 16th minute onward, from the BayLISA 
meetup at Joyent on August 16, 2012:

https://www.youtube.com/watch?v=6csFi0D5eGY
It covers the ZFS throttle implementation architecture in illumos, from 
Joyent's SmartOS.

I learned it is also available in the Entic.net-sponsored OpenIndiana,
and probably in Nexenta too, since it is implemented inside illumos.

N.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS QoS and priorities

2012-12-01 Thread Nikola M.

On 12/ 2/12 05:19 AM, Richard Elling wrote:

On Dec 1, 2012, at 6:54 PM, Nikola M. minik...@gmail.com wrote:


On 12/ 2/12 03:24 AM, Nikola M. wrote:

It uses Solaris Zones and throttles disk usage at the zone level, so you
separate workloads into separate zones.
You can even put KVM machines inside zones (Joyent and OI ship the Joyent-written 
KVM/Intel implementation in illumos) for the same per-zone I/O throttling.

Joyent say their solution takes relatively little code but gives very good 
results (they run a massive cloud computing service with many zones and 
KVM VMs, so they should know).
http://wiki.smartos.org/display/DOC/Tuning+the+IO+Throttle
http://dtrace.org/blogs/wdp/2011/03/our-zfs-io-throttle/


There is a short video, from the 16th minute onward, from the BayLISA meetup 
at Joyent on August 16, 2012:
https://www.youtube.com/watch?v=6csFi0D5eGY
It covers the ZFS throttle implementation architecture in illumos, from 
Joyent's SmartOS.

There was a good presentation on this at the OpenStorage Summit in 2011.
Look for it on youtube.


I learned it is also available in the Entic.net-sponsored OpenIndiana
and probably in Nexenta too, since it is implemented inside illumos.

NexentaStor 3.x is not an illumos-based distribution; it is based on OpenSolaris
b134.
Oh yes, but I had Nexenta in general in mind, where the NexentaStor 
community edition is based on illumos. GDAmore (the illumos founder) is 
from Nexenta, after all.

It is good that one can get support and storage from Nexenta.
And it is a living thing: still being developed, with a future, etc.

And looking at the OpenStorage Summit, I forgot to mention Delphix, 
which also has developers formerly at Sun and sells software appliances.


The last info I got about illumos is that this kind of enhancement does 
not automatically go upstream into illumos; it is up to the 
distributions to choose what to include.


And yes, there are summits:
http://www.nexenta.com/corp/nexenta-tv/openstorage-summit
http://www.openstoragesummit.org/emea/index.html

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatiblity

2011-12-27 Thread Nikola M.
On 12/27/11 09:20 PM, Frank Cusack wrote:
 http://sparcv9.blogspot.com/2011/12/solaris-11-illumos-and-source.html

 If I upgrade ZFS to use the new features in Solaris 11 I will be
 unable to import my pool using the free ZFS implementation that is
 available in illumos based distributions


 Is that accurate?  I understand if the S11 version is ahead of
 illumos, of course I can't use the same pools in both places, but that
 is the same problem as using an S11 pool on S10.  The author is
 implying a much worse situation, that there are zfs tracks in
 addition to versions and that S11 is now on a different track and an
 S11 pool will not be usable elsewhere, ever.  I hope it's just a
 misrepresentation.
I used to have an rpool from OpenSolaris 2009.06, updated via snv_134 to
both OpenIndiana with illumos and to Solaris 11 Express in separate
boot environments, but on the same ZFS rpool.

Since then, Oracle has removed both pkg.opensolaris.org/release and /dev, and
it also seems there is no Solaris 11 Express IPS publisher to be found
anymore.

So one could use pkg.openindiana.org/legacy to update to snv_134 (not
Oracle's snv_134b, which is needed for the S11 Express upgrade) and then
upgrade to the latest OpenIndiana /dev as described on the openindiana.org wiki.
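
For anyone retracing this, a rough sketch of the steps I mean; the
publisher name and flags are from memory, so verify against the wiki
before running them:

  pkg set-publisher -P -O http://pkg.openindiana.org/legacy opensolaris.org
  pkg image-update     # installs the update into a new boot environment
  beadm list           # the new BE appears alongside the existing ones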

But for updating to Oracle's snv_134b and Solaris 11 Express, prior to
updating to Solaris 11 on the same rpool, one would need to download and
activate local IPS repositories of both and install from there, since
Oracle pulled the plug on both, most probably to stop exactly that:
the ability to have OpenIndiana and Solaris 11 on the same rpool, upgraded
from snv_134. S11's ZFS is closed source with a newer version number, and
its pools are therefore not usable by any implementation but Oracle's
(so not by the illumos, zfs-fuse, ZFS on Linux or FreeBSD implementations).

The recent S11 source code leak might help as a blueprint for implementing
possibly compatible versions in OSes other than Oracle's, but it would need
to be rewritten, not copied, because of Oracle's copyright.

So it is possible to have S11 and OpenIndiana/illumos on the same rpool.
You just need both the snv_134b OpenSolaris and the S11 Express IPS
publishers to update from.
You can set them up from repository archives, and if you do, share a
cookbook for it, OK?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-26 Thread Nikola M.
On 04/26/11 01:56 AM, Lamp Zy wrote:
 Hi,

 One of my drives failed in Raidz2 with two hot spares:
What are your zpool/zfs versions? (Run zpool upgrade and zfs upgrade, Ctrl+C.)
The latest zpool/zfs versions available by numerical designation in all
OpenSolaris-based distributions are zpool 28 and zfs v5. (That is why
one should NOT update to the S11Ex zfs/zpool versions if one wants to
install, or continue using, other open OpenSolaris-based distributions
in multiple ZFS BEs.)
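
For reference: run without arguments, those two commands only report
versions and upgrade nothing, so the Ctrl+C is just extra caution. You
can also query the version properties directly (<pool> and <fs> are
placeholders here):

  zpool upgrade               # lists pools not at the newest supported version
  zfs upgrade                 # the same for filesystems
  zpool get version <pool>
  zfs get version <pool>/<fs>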

What OS are you using with ZFS?
Do you use Solaris 10 / an update release, Solaris 11 Express, OpenIndiana
oi_148 dev / 148b with illumos, OpenSolaris 2009.06/snv_134b, Nexenta,
Nexenta Community, SchilliX, FreeBSD, Linux zfs-fuse... (I guess you are
still not using Linux with the ZFS kernel module, but just to mention it
is available; and OS X too.)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] aclmode - no zfs in heterogeneous networks anymore?

2011-04-26 Thread Nikola M.
I am forwarding this to the openindiana-disc...@openindiana.org list,
in the hope of a wider audience for the question.

 Original Message 
Message-ID: 4db68e08.9040...@googlemail.com
Date: Tue, 26 Apr 2011 11:19:04 +0200
From: achim...@googlemail.com
List-Id: zfs-discuss.opensolaris.org

Hi!

We are setting up a new file server on an OpenIndiana box (oi_148). The
zpool is running version 28, so the aclmode option is gone. The server
has to serve files to Linux, OS X and Windows. Because of the missing
aclmode option, we are going nuts with the file permissions.

I have read a whole lot about the problem and the pros and cons of the
decision to drop that option from ZFS, but I have read absolutely nothing
about a solution or workaround.

The problem is that GNOME's Nautilus, as well as OS X's Finder, performs a
chmod after writing a file over NFS, causing all ACLs to vanish.
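
A minimal illustration of that behavior, assuming a dataset whose files
inherit non-trivial ACEs (the file name is made up; ls -v shows the full
ACL on Solaris-derived systems):

  /usr/bin/ls -v shared/report.odt    # shows the inherited user:/group: entries
  chmod 644 shared/report.odt         # what Nautilus/Finder effectively do after writing
  /usr/bin/ls -v shared/report.odt    # only the trivial owner@/group@/everyone@ entries remain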

If there is no solution, zfs seems to be dead. How do you solve this
problem?

Achim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-07 Thread Nikola M.
On 04/ 6/11 07:14 PM, Brandon High wrote:
 On Tue, Apr 5, 2011 at 12:38 PM, Joe Auty j...@netmusician.org wrote:

 How about getting a little more crazy... What if this entire
 server temporarily hosting this data was a VM guest running ZFS? I
 don't foresee this being a problem either, but with so


 The only thing to watch out for is to make sure that the receiving
 datasets aren't a higher version than the zfs version that you'll be
 using on the replacement server. Because you can't downgrade a
 dataset, using snv_151a and planning to send to Nexenta as a final
 step will trip you up unless you explicitly create them with a lower
 version.
Yes, that is exactly why someone thinking about using something with a more
liberal license than Solaris 11 with its paid license should first install
the latest OpenSolaris from snv_134 (or 2009.06, then upgrade to /dev
OpenSolaris 134); from there one can choose an upgrade path to both OpenIndiana
oi_148(b) and S11Ex on the same zpool.

That way, the zpool and zfs versions can stay at versions supported by OI
and Nexenta (and SchilliX and FreeBSD and zfs-fuse on Linux and native ZFS
on Linux, which is in development), and one can experiment with more systems
supporting ZFS instead of being locked into S11Ex only.
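
If the destination really has to stay at the open versions, here is a
sketch of pinning versions at creation time and then replicating into
the new pool. The syntax is from memory and the pool/dataset/device
names are only examples, so verify against the zpool(1M) and zfs(1M)
man pages:

  zpool create -o version=28 tank mirror c0t0d0 c0t1d0
  zfs create -o version=5 tank/backup
  zfs snapshot -r sourcepool/data@migrate
  zfs send -R sourcepool/data@migrate | zfs receive -d tank/backup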

If, sadly, you choose to install from the closed S11Ex disc and not from the
OSol snv_134 CD (the snv_134 .ISO at www.genunix.org) and then upgrade to an
OpenIndiana OI_xxx dev release and/or S11Ex, you might lose the ability to
use anything but closed Solaris from Oracle, so be clever and use the
upgrade path explained above.

Of course, you can have as many Boot Environments (BEs) on the same zpool as
you like, since they basically behave like separate OS installs booting
from the same zpool; that is the beauty of ZFS and (Open)Solaris-based
distributions.
Just do NOT upgrade to the newest closed zpool/zfs versions from S11Ex!
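
A quick sketch of the BE workflow I have in mind (the BE name is only an
example):

  beadm create oi-test      # clones the running BE on the same rpool
  beadm activate oi-test
  init 6                    # reboot into it; the old BE stays bootable
  beadm list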

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] best migration path from Solaris 10

2011-03-23 Thread Nikola M.

On 03/23/11 09:07 AM, Pawel Jakub Dawidek wrote:

On Sun, Mar 20, 2011 at 01:54:54PM +0700, Fajar A. Nugraha wrote:

On Sun, Mar 20, 2011 at 4:05 AM, Pawel Jakub Dawidek p...@freebsd.org wrote:

On Fri, Mar 18, 2011 at 06:22:01PM -0700, Garrett D'Amore wrote:

Newer versions of FreeBSD have newer ZFS code.


Yes, we are at v28 at this point (the latest open-source version).


That said, ZFS on FreeBSD is kind of a 2nd class citizen still. [...]


That's actually not true. There are more FreeBSD committers working on
ZFS than on UFS.


How is the performance of ZFS under FreeBSD? Is it comparable to that
in Solaris, or still slower due to some needed compatibility layer?


This compatibility layer is just a bunch of ugly defines, etc. to allow
for fewer code modifications. It introduces no overhead.

I made a performance comparison between FreeBSD 9 with ZFSv28 and Solaris
11 Express, but I don't think the Solaris license allows me to publish the
results. But believe me, the results were very surprising :)


You can compare OpenIndiana oi_148 (and oi_148a with illumos) and publish 
those comparisons.
I think the site Phoronix.com has already done comparisons of ZFS on 
several platforms against other (Linux) file systems without breaking a sweat.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] best migration path from Solaris 10

2011-03-19 Thread Nikola M.
On 03/19/11 12:17 AM, Toby Thain wrote:
 On 18/03/11 5:56 PM, Paul B. Henson wrote:
 We've been running Solaris 10 for the past couple of years, primarily to
 leverage zfs to provide storage for about 40,000 faculty, staff, and
 students ... and at this point want to start reevaluating our best
 migration option to move forward from Solaris 10.

 There's really nothing else available that is comparable to zfs (perhaps
 btrfs someday in the indefinite future, but who knows when that day
 might come), so our options would appear to be Solaris 11 Express,
 Nexenta (either NexentaStor or NexentaCore), and OpenIndiana (FreeBSD is
 occasionally mentioned as a possibility, but I don't really see that as
 suitable for our enterprise needs).
The questions are: Do you care about your OS being open and not tied to only
one company? Do you care about software and packaging compatibility? Do
you need paid support or not, and do you need it right now or in
the future?
Do you want to tie yourself to Oracle and closed Solaris products?
(Even if, unofficially, there were hints that they might open the code after
the S11 release.)
If you used the closed product before, that might be your enterprise upgrade
path; just be prepared to pay Oracle, and that is it.

If you want to use free open source software with the ability to buy support,
and all you want to use is ZFS,
then Nexenta is your way, with both their free-to-use releases and their
commercially supported ones.
Nexenta supports the development of illumos, which is the future base of
OpenIndiana, too.
So Nexenta is doing something like what Sun was previously doing; they are
actively developing it, and you can get support for less money than from
Oracle, I suppose.

OpenIndiana is, and will continue to be, the closest you can get to Oracle
Solaris releases. It shares software consolidations (and packaging:
IPS, pkg) with its closed brother. OpenIndiana has a stable release in mind
for the near future, which might suit your needs.
Dev OpenIndiana releases are (slowly) following the path of the OpenSolaris
dev releases,
so OpenIndiana can be a Solaris 10 replacement right now (many people just
continued to use OI dev) and in the future, with the transition to the
illumos base ahead in mind.

I think the best thing you can do is to install OpenSolaris snv_134 (or
134b); from that point you can see where you can go: to OpenIndiana
dev, then follow illumos development and wait for OpenIndiana stable,
and even try closed Solaris 11 Express.
(With NO zfs upgrade to the Solaris Express version (!): be sure NOT to do a
zfs and zpool upgrade to the closed Solaris 11 Express versions, because you
will then be locked into Oracle's zfs versions.)
I do not know how Nexenta could be installed into a new BE on the same zpool,
but I suppose it can be, since I know that upgrading Nexenta uses ZFS BEs, too.

That way, with multiple installs sharing ZFS between them, you are on
the safe ground of being able to test and choose what comes in the
future; besides Oracle, there are at least two solutions, now and in the
future, that you can consider.

I would personally like it if one could buy support from Nexenta and
continue to use OpenIndiana or Nexenta :) But Nexenta is more
server-oriented, while OpenIndiana is aiming to be an all-around solution.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Newbie ZFS Question: RAM for Dedup

2010-10-20 Thread Nikola M
Orvar Korvar wrote:
 Sometimes you read about people having low performance deduping: it is 
 because they have too little RAM.
   
I have mostly heard that they get low performance when they start deleting
deduplicated data, not before that.

So do you think that with 2.2 GB of RAM per 1 TB of storage, with 128 KB
blocks, deduplication will have no performance impact when deleting
deduped data?
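
Rough arithmetic behind that figure, as I understand it (the per-entry
size is only the commonly cited ballpark): 1 TB of unique data in 128 KB
records is roughly 8 million blocks, and at a few hundred bytes of
dedup-table (DDT) entry per block that is on the order of 2 to 2.5 GB of
table, which is presumably where rules of thumb like 2.2 GB of RAM per
TB come from. Deleting deduplicated data still has to look up and
decrement a reference count for every freed block, so if the DDT does
not fit in RAM (or L2ARC) each free can turn into random disk reads.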

Or is it as everyone was saying, that slow deletion of deduplicated
data is something that is being, or is to be, fixed in further ZFS development?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to migrate to 4KB sector drives?

2010-09-12 Thread Nikola M
Orvar Korvar wrote:
 ZFS does not handle 4K sector drives well; you need to create a new zpool 
 with the 4K property (ashift) set. 
 http://www.solarismen.de/archives/5-Solaris-and-the-new-4K-Sector-Disks-e.g.-WDxxEARS-Part-2.html

 Are there plans to allow resilver to handle 4K sector drives?
   
I'm not sure about resilvering to 4K, but as the Solaris how-to at the link
you provided describes,
you can make new zpools aligned to 4K.
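
For what it is worth, a sketch of creating an aligned pool on
implementations whose zpool command accepts an ashift override at
creation time (to my knowledge the stock snv_134 binary does not, and
the device names below are only examples), plus a way to verify it
afterwards:

  zpool create -o ashift=12 tank mirror c1t0d0 c1t1d0
  zdb -C tank | grep ashift     # should report ashift: 12 for the vdev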

Until OpenIndiana.org comes to life,
maybe in the meantime you can try building the illumos OS/Net testing bits,
put them in a separate BE on top of the 134b OpenSolaris from genunix.org, and
try the 4K-sector-drive rpool instructions you found on it, for testing only.
http://www.illumos.org/projects/illumos-gate/wiki/How_To_Build_illumos

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Encryption?

2010-07-12 Thread Nikola M
Freddie Cash wrote:
 You definitely want to do the ZFS bits from within FreeBSD.
Why not use ZFS on OpenSolaris? At least it has the most stable/tested
implementation, and also the newest one, if needed.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss