Not really.
There is one tool called rclone, but it does a full upload of any
changed file, so it does not help.
There are some FUSE file systems but I don't believe these will work with
Illumos either.
I'd really like to avoid moving my pool if I don't have to. OmniOS has
been great.
On Wed,
You can create an LX zone with the latest stable Omni release, share the
dataset(s) with the zone and run the backup agent in there. That's what I'm
using for a bunch of things such as Veeam, BeeGFS, and Plex. It works like a
charm as long as you don't need extended attribute support.
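A minimal zonecfg sketch of that arrangement, assuming a zone named backup, a dataset tank/data, and an LX kernel-version of 3.16.0 (all three names/values are illustrative; check the OmniOS LX notes for your release):

```
zonecfg -z backup <<'EOF'
create -t lx
set zonepath=/zones/backup
add attr
set name=kernel-version
set type=string
set value=3.16.0
end
add fs
set dir=/backup/data
set special=/tank/data
set type=lofs
add options [nodevices]
end
EOF
# LX zones install from a Linux image tarball (path illustrative):
zoneadm -z backup install -s /tmp/lx-image.tar.gz
zoneadm -z backup boot
```

The lofs mount is what exposes the dataset's files to the Linux-side backup agent running inside the zone.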
Michael
On Thu, 5 Jan 2017 01:15:18 +0100
Michael Rasmussen wrote:
> Would it be possible to use rsync with Backblaze?
> rsync handles sparse files equally well using the --sparse option
>
A quick google:
http://www.rsync.net/resources/howto/unix.html
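For what it's worth, the hole-preserving behaviour that --sparse relies on is easy to demonstrate locally; a quick sketch (the file path and the commented-out rsync destination are illustrative):

```shell
# Create a file with a 100 MiB apparent size but almost no allocated
# blocks: seeking past EOF and writing one byte leaves a hole behind it.
dd if=/dev/zero of=/tmp/sparse.img bs=1 count=1 seek=104857599 2>/dev/null
ls -l /tmp/sparse.img   # apparent size: 104857600 bytes
du -k /tmp/sparse.img   # allocated size: a few KiB at most
# With --sparse, rsync detects runs of zeros and recreates the holes on
# the destination; without it, the copy occupies the full 100 MiB there.
# rsync -a --sparse /tmp/sparse.img user@backuphost:/vault/
```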
On Wed, 04 Jan 2017 23:59:15 +
Mini Trader wrote:
>
> If anyone has any recommendations for an incremental cloud storage solution
> that is compatible with OmniOS it would be greatly appreciated. I realize
> that ZFS send works quite well but haven't found an off
That is unfortunate.
The requirement stems from the need to perform an incremental backup to
cloud storage (backblaze).
Many of my files are large and sparse (VMs). Unfortunately, the only
software that I have found to perform the backups runs on BSD or Linux,
hence the need for NFS.
Running the
> On Jan 4, 2017, at 4:57 PM, Michael Rasmussen wrote:
>
> Hi all,
>
> pkg search diskinfo
> pkg: Some repositories failed to respond appropriately:
> ms.omniti.com:
> http protocol error: code: 503 reason: Service Unavailable
> URL:
>
So, odd thing about this server… there is NO “/dev/rdsk/c14t0d0s0” anywhere.
I wonder what that’s all about.
On 1/4/17, 3:17 PM, "John Barfield" wrote:
You’re awesome! It’s hanging on “open /dev/rdsk/c14t0d0s0”.
Looking at that disk now.
On
Hi all,
pkg search diskinfo
pkg: Some repositories failed to respond appropriately:
ms.omniti.com:
http protocol error: code: 503 reason: Service Unavailable
URL:
'http://pkg.omniti.com/omniti-ms/ms.omniti.com/search/1/False_2_None_None_%3A%3A%3Adiskinfo'
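A 503 there means the repository server answered but is unhealthy, so the fix is server-side; the client configuration can at least be sanity-checked with the stock pkg(5) commands:

```
pkg publisher       # list configured publishers and their origin URLs
pkg refresh --full  # re-fetch the catalogs once the server is back up
```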
--
Hilsen/Regards
Michael Rasmussen
Just as a data point we have a couple hosts running the LSI 9300-8e HBA to
JBODs on 151014 with no issues so far.
I was specifically wondering what is supported in the latest OmniOS
> release, but I suppose you're suggesting that I could backport an illumos
> driver from the latest build into
> On Jan 4, 2017, at 4:21 PM, Michael Rasmussen wrote:
>
> On Wed, 4 Jan 2017 15:56:17 -0500
> Dan McDonald wrote:
>
>>
>> (Note the get-rid-of-SunSSH before actually upgrading part.)
>>
> I guess this will be a prerequisite for doing 151014 -> 151022 anyway?
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss
For the record… here is the complete command output after it finally completed:
root@PRD-GIP-cpls-san1:/export/home/jbarfield# truss -t "open,ioctl" -f diskinfo
19752: open("/var/ld/ld.config", O_RDONLY) Err#2 ENOENT
19752: open("/lib/libdladm.so.1", O_RDONLY) = 3
19752:
On Wed, 4 Jan 2017 15:56:17 -0500
Dan McDonald wrote:
>
> (Note the get-rid-of-SunSSH before actually upgrading part.)
>
I guess this will be a prerequisite for doing 151014 -> 151022 anyway?
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG keys:
michael rasmussen cc
You’re awesome! It’s hanging on “open /dev/rdsk/c14t0d0s0”.
Looking at that disk now.
On 1/4/17, 3:15 PM, "Joshua M. Clulow" wrote:
On 4 January 2017 at 13:09, John Barfield wrote:
> It actually finally completed. Doing it again with the
On 4 January 2017 at 13:09, John Barfield wrote:
> It actually finally completed. Doing it again with the proper PID as
> suggested provides the following:
> root@PRD-GIP-cpls-san1:/export/home/jbarfield# mdb -k
>> 0t19448::pid2proc | ::walk thread | ::findstack -v
>
one more thing…
> On Jan 4, 2017, at 10:29 AM, Richard Elling
> wrote:
>
>>
>> On Jan 4, 2017, at 10:04 AM, Chris Siebenmann wrote:
>>
>> We recently had a server reboot due to the ZFS vdev_deadman/spa_deadman
>> timeout timer
On 4 January 2017 at 12:57, John Barfield wrote:
> root@PRD-GIP-cpls-san1:/export/home/jbarfield# pstack 18919
> 18919: sudo diskinfo
> fee8e665 pollsys (8047b50, 2, 0, 0)
> fee264af pselect (6, 8089c30, 8089c50, fef06320, 0, 0) + 1bf
> fee267b8 select (6,
Hi Joshua,
As requested:
root@PRD-GIP-cpls-san1:/export/home/jbarfield# pstack 18919
18919: sudo diskinfo
fee8e665 pollsys (8047b50, 2, 0, 0)
fee264af pselect (6, 8089c30, 8089c50, fef06320, 0, 0) + 1bf
fee267b8 select (6, 8089c30, 8089c50, 0, 0, 0) + 8e
08055f64 sudo_execute (807c960,
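A note on the pstack output above: sudo is the parent, parked in select() waiting on its child, so the interesting stack is the child's. A sketch of chasing it down (the child PID is the one that appears later in this thread):

```
ptree 18919      # show the process tree under sudo; the child is diskinfo
pstack 19448     # user-level stack of the diskinfo process itself
mdb -k           # kernel-side view of the same process:
> 0t19448::pid2proc | ::walk thread | ::findstack -v
```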
Start with 014, get rid of SunSSH, and you should be able to Just Upgrade:
https://omnios.omniti.com/wiki.php/Upgrade_to_r151020
(Note the get-rid-of-SunSSH before actually upgrading part.)
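As a rough sketch of what that wiki page's SunSSH removal amounts to (the exact --reject package list here is from memory and may differ by release; follow the wiki verbatim rather than this):

```
# Replace SunSSH with OpenSSH before the upgrade:
pkg install --reject pkg:/network/ssh \
            --reject pkg:/network/ssh/ssh-key \
            --reject pkg:/service/network/ssh \
            pkg:/network/openssh pkg:/network/openssh-server
# Then follow the wiki's publisher-change and pkg update steps.
```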
Dan
On 4 January 2017 at 12:29, John Barfield wrote:
> I’ve got a SAN that seems to be timing out on any hardware probing commands
> such as “format” or “diskinfo” although prtconf seems to work.
>
> Does anyone happen to have a dtrace one liner or maybe kstat command I
I'm trying to update some omnios machines I use as fibre targets from 151014 to
current, one step at a time.
In step one, just trying to get to 016, I've already gotten stuck. Any ideas on
how to get the upgrade to proceed?
# cat /etc/release
OmniOS v11 r151014
Copyright 2015 OmniTI Computer
> On Jan 4, 2017, at 3:35 PM, John Barfield wrote:
>
> Are you suggesting that I buy a 9300 series or simply stating that we can
> move to that series soon in the future?
Buy a 9300 now. It's been working for ALL of our supported releases (i.e. 014
and later).
>
We will definitely use the latest omnios after everything is ordered.
Are you suggesting that I buy a 9300 series or simply stating that we can move
to that series soon in the future?
I am looking to start purchasing and building immediately. What is the release
timeline for 151022?
On
> On Jan 4, 2017, at 3:21 PM, John Barfield wrote:
>
> I was specifically wondering what is supported in the latest OmniOS release,
> but I suppose you're suggesting that I could backport an illumos driver from
> the latest build into OmniOS?
>
> OmniOS is the
Greetings again,
I’ve got a SAN that seems to be timing out on any hardware probing commands
such as “format” or “diskinfo” although prtconf seems to work.
fmadm faulty shows a bad fan in the MD1200 JBOD that is connected, and that is
all.
I’m trying to nail down the root but I’m hitting a
I was specifically wondering what is supported in the latest OmniOS release,
but I suppose you're suggesting that I could backport an illumos driver from
the latest build into OmniOS?
OmniOS is the distro I use for SAN deployments. I use SmartOS for hypervisors.
On 1/4/17, 2:18 PM, "Dan
> On Jan 4, 2017, at 3:02 PM, John Barfield wrote:
>
> Greetings,
>
> I’m designing a new ZFS SAN and I’m wondering if anyone knows the latest
> greatest supported HBA that I should use for external JBOD’s.
>
> We have tons of SAN’s in production using LSI HBAs
Greetings,
I’m designing a new ZFS SAN and I’m wondering if anyone knows the latest
greatest supported HBA that I should use for external JBOD’s.
We have tons of SAN’s in production using LSI HBAs today but I’m wondering if
there is anything new out there that I’m missing or not aware of yet.
> On Jan 4, 2017, at 10:04 AM, Chris Siebenmann wrote:
>
> We recently had a server reboot due to the ZFS vdev_deadman/spa_deadman
> timeout timer activating and panicking the system. If you haven't heard
> of this timer before, that's not surprising; triggering it requires
Hello all,
Is there any support for NFSv4.2 in Illumos? I am interested in the
sparse-file functionality that has been introduced.
Thanks!
On Wed, Jan 4, 2017 at 1:04 PM, Chris Siebenmann wrote:
> (I have a crash dump from this panic, so I can in theory use mdb
> to look through it to see just what level an IO appears stuck at
> if I know what to look for and how.)
Hi Chris,
I recently blogged about digging
We recently had a server reboot due to the ZFS vdev_deadman/spa_deadman
timeout timer activating and panicking the system. If you haven't heard
of this timer before, that's not surprising; triggering it requires an
IO to a vdev to take more than 1000 seconds (by default; it's controlled
by the
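For reference, a sketch of poking at the deadman knobs on a live illumos system, assuming the tunable names zfs_deadman_synctime_ms and zfs_deadman_enabled (verify both against your kernel's source before relying on them):

```
# Current timeout in milliseconds (1000000 ms = the 1000 s default):
echo 'zfs_deadman_synctime_ms/E' | mdb -k
# Whether the deadman is armed at all:
echo 'zfs_deadman_enabled/D' | mdb -k
# To stop it panicking the box, in /etc/system:
#   set zfs:zfs_deadman_enabled = 0
```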
The first of the big changes for this bloody cycle is underway internally, i.e.
the move from Python2.6 to Python2.7 for OmniOS-internal users of Python.
Those users are currently:
pkg(5)
traditional installer media (not kayak)
As of now, the pkg5 test suite is mostly passing
It was a build accident, not intentional.
Next time kernels get updated, it'll be fixed. If this is breaking scripts,
I'll push an update that fixes it (at the cost of requiring another reboot).
I'm very sorry about it. It's my fault for making build environment
assumptions. A very recent
Morning,
Just finished updating servers after last week's update to r151020 and
noticed that the uname -v output has lost the revision information.
Before:
SunOS carolina 5.11 omnios-r151020-b5b8c75 i86pc i386 i86pc
After:
SunOS reaper 5.11 omnios-bed3013 i86pc i386 i86pc
Was that