Re: [OmniOS-discuss] zfs send | recv

2018-06-11 Thread Oliver Weinmann
I think I really have to start investigating third-party apps again. Nexenta 
doesn't let me change the zfs send command; I can only adjust settings for the 
autosync job.





Oliver Weinmann
Head of Corporate ICT

Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799
oliver.weinm...@telespazio-vega.de
www.telespazio-vega.de

Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, 
HRB 89231; Managing Director/Geschäftsführer: Sigmar Keller


From: OmniOS-discuss  On Behalf Of 
Guenther Alka
Sent: Montag, 11. Juni 2018 12:16
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs send | recv

I suppose you can either keep the last snaps identical on source and target 
with a simple recursive zfs send, or you need a script that does a send per 
filesystem, which allows a different number of snaps on the target system.

This is not related to Nexenta; I saw the same on current OmniOS -> OmniOS, as 
both use the same Open-ZFS base from illumos.

Gea
@napp-it.org
Am 11.06.2018 um 10:58 schrieb Oliver Weinmann:
Yes, we replicate recursively. We have hundreds of child datasets, so single 
filesystems would be a real headache to maintain. :(






From: OmniOS-discuss  On Behalf Of Guenther Alka
Sent: Montag, 11. Juni 2018 09:55
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs send | recv

Did you replicate recursively?
Keeping a different snap history should be possible when you send single 
filesystems.


gea
@napp-it.org
Am 11.06.2018 um 09:11 schrieb Oliver Weinmann:
Hi,


We are replicating snapshots from a Nexenta system to an OmniOS system. Nexenta 
calls this feature autosync. While they say it is only 100% supported between 
Nexenta systems, we managed to get it working with OmniOS too. It's not rocket 
science. But there is one big problem: in the autosync job on the Nexenta 
system one can specify how many snaps to keep locally on the Nexenta and how 
many to keep on the target system. Somehow we always have the same number of 
snaps on both systems. Autosync always cleans all snaps on the destination that 
don't exist on the source. I contacted Nexenta support and they told me that 
this is due to different versions of zfs send and zfs recv. There should be a 
-K flag that instructs the destination not to destroy snapshots that don't 
exist on the source. Is such a flag available in OmniOS? I assume the flag is 
set on the sending side, so the receiving side has to understand it.



Best Regards,

Oliver











___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] zfs send | recv

2018-06-11 Thread Guenther Alka
I suppose you can either keep the last snaps identical on source and 
target with a simple recursive zfs send, or you need a script that does 
a send per filesystem, which allows a different number of snaps on the 
target system.

This is not related to Nexenta; I saw the same on current OmniOS -> 
OmniOS, as both use the same Open-ZFS base from illumos.
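
A minimal sketch of the two approaches (pool, dataset and snapshot names 
are placeholders, and the ssh transport is only an example):

    # 1) Recursive replication: a forced receive keeps the target identical
    #    to the source; snapshots destroyed on the source are destroyed on
    #    the target as well.
    zfs send -R -i tank/data@auto-1 tank/data@auto-2 | \
        ssh target zfs recv -F backup/data

    # 2) One send per filesystem, received without -F: the target can retain
    #    older snapshots that no longer exist on the source, as long as the
    #    target filesystems are not modified outside of the receive.
    for fs in $(zfs list -H -o name -r tank/data); do
        zfs send -i "$fs@auto-1" "$fs@auto-2" | \
            ssh target zfs recv "backup/${fs#tank/}"
    done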


Gea
@napp-it.org

Am 11.06.2018 um 10:58 schrieb Oliver Weinmann:


Yes, we replicate recursively. We have hundreds of child datasets, so single 
filesystems would be a real headache to maintain. :(




From: OmniOS-discuss  On Behalf Of Guenther Alka
Sent: Montag, 11. Juni 2018 09:55
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs send | recv

Did you replicate recursively?
Keeping a different snap history should be possible when you send 
single filesystems.



gea
@napp-it.org

Am 11.06.2018 um 09:11 schrieb Oliver Weinmann:

Hi,

We are replicating snapshots from a Nexenta system to an OmniOS 
system. Nexenta calls this feature autosync. While they say it is 
only 100% supported between Nexenta systems, we managed to get it 
working with OmniOS too. It's not rocket science. But there is one 
big problem: in the autosync job on the Nexenta system one can 
specify how many snaps to keep locally on the Nexenta and how many 
to keep on the target system. Somehow we always have the same 
number of snaps on both systems. Autosync always cleans all snaps 
on the destination that don't exist on the source. I contacted 
Nexenta support and they told me that this is due to different 
versions of zfs send and zfs recv. There should be a -K flag that 
instructs the destination not to destroy snapshots that don't exist 
on the source. Is such a flag available in OmniOS? I assume the 
flag is set on the sending side, so the receiving side has to 
understand it.

Best Regards,

Oliver










--
H  f   G
Hochschule für Gestaltung
university of design

Schwäbisch Gmünd
Rektor-Klaus Str. 100
73525 Schwäbisch Gmünd

Guenther Alka, Dipl.-Ing. (FH)
Leiter des Rechenzentrums
head of computer center

Tel 07171 602 627
Fax 07171 69259
guenther.a...@hfg-gmuend.de
http://rz.hfg-gmuend.de

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] zfs send | recv

2018-06-11 Thread Oliver Weinmann
Yes, we replicate recursively. We have hundreds of child datasets, so single 
filesystems would be a real headache to maintain. :(





Oliver Weinmann
Head of Corporate ICT

Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799
oliver.weinm...@telespazio-vega.de
www.telespazio-vega.de

Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, 
HRB 89231; Managing Director/Geschäftsführer: Sigmar Keller


From: OmniOS-discuss  On Behalf Of 
Guenther Alka
Sent: Montag, 11. Juni 2018 09:55
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs send | recv

Did you replicate recursively?
Keeping a different snap history should be possible when you send single 
filesystems.


gea
@napp-it.org
Am 11.06.2018 um 09:11 schrieb Oliver Weinmann:
Hi,


We are replicating snapshots from a Nexenta system to an OmniOS system. Nexenta 
calls this feature autosync. While they say it is only 100% supported between 
Nexenta systems, we managed to get it working with OmniOS too. It's not rocket 
science. But there is one big problem: in the autosync job on the Nexenta 
system one can specify how many snaps to keep locally on the Nexenta and how 
many to keep on the target system. Somehow we always have the same number of 
snaps on both systems. Autosync always cleans all snaps on the destination that 
don't exist on the source. I contacted Nexenta support and they told me that 
this is due to different versions of zfs send and zfs recv. There should be a 
-K flag that instructs the destination not to destroy snapshots that don't 
exist on the source. Is such a flag available in OmniOS? I assume the flag is 
set on the sending side, so the receiving side has to understand it.



Best Regards,

Oliver













___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] zfs send | recv

2018-06-11 Thread priyadarshan
Hi Oliver,

Thank you for sharing your use-case on Nexenta.

A few years ago we also needed to have a DR site, and I tested several third 
party tools.

The introduction of ZFS to Linux, with ZoL, made things more confusing by 
adding several more tools (like sanoid: https://github.com/jimsalterjrs/sanoid).

Ultimately we settled on a simple tool, zfsnap, which we have tested over the 
years with satisfactory results.

Of course, if Nexenta no longer allows shell access, that makes it a bit more 
difficult.

I would defer to more expert people, like Guenther, who also replied to your 
message.

Kind regards,

Priyadarshan


> On 11 Jun 2018, at 10:07, Oliver Weinmann 
>  wrote:
> 
> Hi Priyadarshan,
> 
> Thanks a lot for the quick and comprehensive answer. I agree that using a 
> third-party tool might be helpful. When we started using the two ZFS systems, 
> I had a really hard time testing a few third-party tools. One of the biggest 
> problems was that I wanted to be able to use the OmniOS system as a DR site. 
> However, flipping the mirror always caused the Nexenta system to crash, so to 
> this day there is no real solution for using the OmniOS system as a real DR 
> site. This is due to different versions of zfs send and recv on the two 
> systems and not related to using third-party tools. I have tested zrep, as it 
> contains a DR mode, and looked at znapzend but had no time to test it. We 
> were told that the new version of Nexenta no longer supports an ordinary way 
> to sync snaps to a non-Nexenta system, as there is no shell access anymore. 
> Nexenta 5.x provides an API for this. I need to find some time to test it.
> 
> Best Regards,
> Oliver
> 
> 
> 
> 
> 
> -----Original Message-----
> From: priyadarshan 
> Sent: Montag, 11. Juni 2018 09:46
> To: Oliver Weinmann 
> Cc: omnios-discuss@lists.omniti.com
> Subject: Re: [OmniOS-discuss] zfs send | recv
> 
> 
> 
>> On 11 Jun 2018, at 09:11, Oliver Weinmann 
>>  wrote:
>> 
>> Hi,
>> 
>> We are replicating snapshots from a Nexenta system to an OmniOS system. 
>> Nexenta calls this feature autosync. While they say it is only 100% 
>> supported between Nexenta systems, we managed to get it working with OmniOS 
>> too. It's not rocket science. But there is one big problem: in the autosync 
>> job on the Nexenta system one can specify how many snaps to keep locally on 
>> the Nexenta and how many to keep on the target system. Somehow we always 
>> have the same number of snaps on both systems. Autosync always cleans all 
>> snaps on the destination that don't exist on the source. I contacted 
>> Nexenta support and they told me that this is due to different versions of 
>> zfs send and zfs recv. There should be a -K flag that instructs the 
>> destination not to destroy snapshots that don't exist on the source. Is 
>> such a flag available in OmniOS? I assume the flag is set on the sending 
>> side, so the receiving side has to understand it.
>> 
>> Best Regards,
>> Oliver
>> 
> 
> Hello,
> 
> OmniOS devs, please correct me if I am mistaken: I believe OmniOS faithfully 
> tracks ZFS from illumos-gate.
> 
> One can follow various upstream merges here:
> https://github.com/omniosorg/illumos-omnios/pulls?q=is%3Apr+is%3Aclosed
> 
> Based on that, illumos man pages also apply to OmniOS: 
> https://omnios.omniti.com/wiki.php/ManSections
> 
> Illumos zfs man page is here: https://illumos.org/man/1m/zfs
> 
> That page does not seem to offer a -K flag.
> 
> You may want to consider third party tools.
> 
> Our use-case is very similar to the one you detailed; we fulfil it with 
> zfsnap, with reliable and consistent results.
> 
> git repository: https://github.com/zfsnap/zfsnap
> site: http://www.zfsnap.org/
> man page: http://www.zfsnap.org/zfsnap_manpage.html
> 
> With zfsnap we have been maintaining (almost) live replicas of mail servers, 
> including snapshotting, either automatically synchronised to master, or kept 
> aside for special needs.
> 
> One just needs to tweak a shell script (or simply one or more cron jobs) to 
> achieve what is desired.
> 
> 
> Priyadarshan
> 
> 

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] zfs send | recv

2018-06-11 Thread Guenther Alka

Did you replicate recursively?
Keeping a different snap history should be possible when you send single 
filesystems.



gea
@napp-it.org

Am 11.06.2018 um 09:11 schrieb Oliver Weinmann:


Hi,

We are replicating snapshots from a Nexenta system to an OmniOS 
system. Nexenta calls this feature autosync. While they say it is only 
100% supported between Nexenta systems, we managed to get it working 
with OmniOS too. It's not rocket science. But there is one big 
problem: in the autosync job on the Nexenta system one can specify how 
many snaps to keep locally on the Nexenta and how many to keep on the 
target system. Somehow we always have the same number of snaps on both 
systems. Autosync always cleans all snaps on the destination that 
don't exist on the source. I contacted Nexenta support and they told 
me that this is due to different versions of zfs send and zfs recv. 
There should be a -K flag that instructs the destination not to 
destroy snapshots that don't exist on the source. Is such a flag 
available in OmniOS? I assume the flag is set on the sending side, so 
the receiving side has to understand it.


Best Regards,

Oliver








___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] zfs send | recv

2018-06-11 Thread Oliver Weinmann
Hi Priyadarshan,

Thanks a lot for the quick and comprehensive answer. I agree that using a 
third-party tool might be helpful. When we started using the two ZFS systems, I 
had a really hard time testing a few third-party tools. One of the biggest 
problems was that I wanted to be able to use the OmniOS system as a DR site. 
However, flipping the mirror always caused the Nexenta system to crash, so to 
this day there is no real solution for using the OmniOS system as a real DR 
site. This is due to different versions of zfs send and recv on the two systems 
and not related to using third-party tools. I have tested zrep, as it contains 
a DR mode, and looked at znapzend but had no time to test it. We were told that 
the new version of Nexenta no longer supports an ordinary way to sync snaps to 
a non-Nexenta system, as there is no shell access anymore. Nexenta 5.x provides 
an API for this. I need to find some time to test it.

Best Regards,
Oliver





Oliver Weinmann
Head of Corporate ICT
Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799
oliver.weinm...@telespazio-vega.de
www.telespazio-vega.de
Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, 
HRB 89231; Managing Director/Geschäftsführer: Sigmar Keller

-----Original Message-----
From: priyadarshan 
Sent: Montag, 11. Juni 2018 09:46
To: Oliver Weinmann 
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs send | recv



> On 11 Jun 2018, at 09:11, Oliver Weinmann 
>  wrote:
>
> Hi,
>
> We are replicating snapshots from a Nexenta system to an OmniOS system. 
> Nexenta calls this feature autosync. While they say it is only 100% supported 
> between Nexenta systems, we managed to get it working with OmniOS too. It's 
> not rocket science. But there is one big problem: in the autosync job on the 
> Nexenta system one can specify how many snaps to keep locally on the Nexenta 
> and how many to keep on the target system. Somehow we always have the same 
> number of snaps on both systems. Autosync always cleans all snaps on the 
> destination that don't exist on the source. I contacted Nexenta support and 
> they told me that this is due to different versions of zfs send and zfs recv. 
> There should be a -K flag that instructs the destination not to destroy 
> snapshots that don't exist on the source. Is such a flag available in OmniOS? 
> I assume the flag is set on the sending side, so the receiving side has to 
> understand it.
>
> Best Regards,
> Oliver
>

Hello,

OmniOS devs, please correct me if I am mistaken: I believe OmniOS faithfully 
tracks ZFS from illumos-gate.

One can follow various upstream merges here:
https://github.com/omniosorg/illumos-omnios/pulls?q=is%3Apr+is%3Aclosed

Based on that, illumos man pages also apply to OmniOS: 
https://omnios.omniti.com/wiki.php/ManSections

Illumos zfs man page is here: https://illumos.org/man/1m/zfs

That page does not seem to offer a -K flag.

You may want to consider third party tools.

Our use-case is very similar to the one you detailed; we fulfil it with 
zfsnap, with reliable and consistent results.

git repository: https://github.com/zfsnap/zfsnap
site: http://www.zfsnap.org/
man page: http://www.zfsnap.org/zfsnap_manpage.html

With zfsnap we have been maintaining (almost) live replicas of mail servers, 
including snapshots, either automatically synchronised to the master or kept 
aside for special needs.

One just needs to tweak a shell script (or simply one or more cron jobs) to 
achieve what is desired.


Priyadarshan


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] zfs send | recv

2018-06-11 Thread priyadarshan


> On 11 Jun 2018, at 09:11, Oliver Weinmann 
>  wrote:
> 
> Hi,
>  
> We are replicating snapshots from a Nexenta system to an OmniOS system. 
> Nexenta calls this feature autosync. While they say it is only 100% supported 
> between Nexenta systems, we managed to get it working with OmniOS too. It's 
> not rocket science. But there is one big problem: in the autosync job on the 
> Nexenta system one can specify how many snaps to keep locally on the Nexenta 
> and how many to keep on the target system. Somehow we always have the same 
> number of snaps on both systems. Autosync always cleans all snaps on the 
> destination that don't exist on the source. I contacted Nexenta support and 
> they told me that this is due to different versions of zfs send and zfs recv. 
> There should be a -K flag that instructs the destination not to destroy 
> snapshots that don't exist on the source. Is such a flag available in OmniOS? 
> I assume the flag is set on the sending side, so the receiving side has to 
> understand it.
>  
> Best Regards,
> Oliver
> 

Hello,

OmniOS devs, please correct me if I am mistaken: I believe OmniOS faithfully 
tracks ZFS from illumos-gate.

One can follow various upstream merges here:
https://github.com/omniosorg/illumos-omnios/pulls?q=is%3Apr+is%3Aclosed

Based on that, illumos man pages also apply to OmniOS: 
https://omnios.omniti.com/wiki.php/ManSections

Illumos zfs man page is here: https://illumos.org/man/1m/zfs

That page does not seem to offer a -K flag. 
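
This can be double-checked on the receiving box itself; a quick, local sanity 
check (it only tells you what the local zfs binary supports):

    # Running zfs receive with no arguments prints a usage summary listing
    # the locally supported option letters; -K does not appear on illumos.
    zfs receive

    # Or search the manual page directly:
    man zfs | grep -- '-K'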

You may want to consider third party tools.

Our use-case is very similar to the one you detailed; we fulfil it with 
zfsnap, with reliable and consistent results.

git repository: https://github.com/zfsnap/zfsnap
site: http://www.zfsnap.org/
man page: http://www.zfsnap.org/zfsnap_manpage.html

With zfsnap we have been maintaining (almost) live replicas of mail servers, 
including snapshots, either automatically synchronised to the master or kept 
aside for special needs.

One just needs to tweak a shell script (or simply one or more cron jobs) to 
achieve what is desired.
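
For example, a pair of cron entries along these lines (zfsnap 2.x syntax as I 
recall it; the install path, dataset and TTLs are placeholders -- see the man 
page linked above):

    # Hourly recursive snapshots of tank/mail, each kept for one week
    0 * * * * /usr/local/sbin/zfsnap snapshot -a 1w -r tank/mail

    # Daily cleanup of snapshots whose TTL has expired
    15 0 * * * /usr/local/sbin/zfsnap destroy -r tank/mail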


Priyadarshan


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


[OmniOS-discuss] zfs send | recv

2018-06-11 Thread Oliver Weinmann
Hi,


We are replicating snapshots from a Nexenta system to an OmniOS system. Nexenta 
calls this feature autosync. While they say it is only 100% supported between 
Nexenta systems, we managed to get it working with OmniOS too. It's not rocket 
science. But there is one big problem: in the autosync job on the Nexenta 
system one can specify how many snaps to keep locally on the Nexenta and how 
many to keep on the target system. Somehow we always have the same number of 
snaps on both systems. Autosync always cleans all snaps on the destination that 
don't exist on the source. I contacted Nexenta support and they told me that 
this is due to different versions of zfs send and zfs recv. There should be a 
-K flag that instructs the destination not to destroy snapshots that don't 
exist on the source. Is such a flag available in OmniOS? I assume the flag is 
set on the sending side, so the receiving side has to understand it.
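
For reference, the pruning described above matches what the illumos zfs(1M) 
man page documents for a forced receive of a recursive stream; illumos has no 
-K flag. A hedged sketch, with placeholder pool and snapshot names:

    # With a stream generated by zfs send -R, receiving it with -F destroys
    # snapshots (and filesystems) on the destination that no longer exist on
    # the source -- both sides end up with the same snapshot list.
    zfs send -R -i tank@auto-1 tank@auto-2 | ssh omnios zfs recv -F backup/tank

    # Receiving the same stream without -F leaves extra, older snapshots on
    # the destination in place, at the cost of the receive failing if the
    # destination has been modified since its newest snapshot.
    zfs send -R -i tank@auto-1 tank@auto-2 | ssh omnios zfs recv backup/tank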



Best Regards,

Oliver





Oliver Weinmann
Head of Corporate ICT

Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: +49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799
oliver.weinm...@telespazio-vega.de
www.telespazio-vega.de

Registered office/Sitz: Darmstadt, Register court/Registergericht: Darmstadt, 
HRB 89231; Managing Director/Geschäftsführer: Sigmar Keller


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss