> On May 13, 2015, at 6:45 PM, Matthew McGee wrote:
>
> Interesting. Using the trailing "." for an absolute FQDN works.
> Any hints on how to make it work without the full FQDN?
> I assume it's probably a kerberos related issue?
I'd suggest asking the illumos mailing list (discussion or developer).
On Wed, May 13, 2015 at 8:10 AM, Dominik Hassler wrote:
Did you try to end your FQDN with a trailing dot?
>
> like: 'DATA.HOME.example.net.' in your example?
Well, don't forget, my latest tests were w/ KVMs running inside zones.
As Dan pointed out today in another thread, the lack of VND upstream
might have a bigger impact on KVMs running inside zones.
On 05/13/2015 10:13 PM, Michael Rasmussen wrote:
On Wed, 13 May 2015 14:28:22 -0400
Dan McDonald wrote:
>
> Tobi's sheet has a preliminary version. Not sure if he's tested with the one
> that actually is in the repo servers now.
>
> ALSO, 012 got the perf fix because it was easier to bring that along for the
> ride instead of addressing VENOM
I’ve been running an all-ssd setup on a Dell R720, with dual 9207-8i cards
connected to dual 8x2.5 disk backplane. (9207-8i is one of the only cards that
doesn't interfere with the BIOS, as Dell implemented it for tape drive
support). Boot disks are hooked up internally, connected to the onboard
I've applied yesterday's KVM performance patch, ran performance tests, and
posted the results in Tobi's sheet.
Sent from my Samsung device
Original message
From: Dan McDonald
Date: 13/05/2015 20:28 (GMT+01:00)
To: Michael Rasmussen
Cc: omnios-discuss@lists.omniti.com
> On May 13, 2015, at 2:14 PM, Michael Rasmussen wrote:
>
> Has anyone run performance tests with the patched kvm package?
Tobi's sheet has a preliminary version. Not sure if he's tested with the one
that actually is in the repo servers now.
ALSO, 012 got the perf fix because it was easier to bring that along for the
ride instead of addressing VENOM
On Tue, 12 May 2015 14:59:02 -0400
Dan McDonald wrote:
>
> I chose option #2:
>
>
> https://github.com/omniti-labs/omnios-build/commit/0268a2ff04b1cbed2324054cb97a0f36c58989b0
>
> There's now an update for r151014 that has the updated system/kvm
> (qemu/userland) and driver/virtualizat
Some of you have probably been tracking VENOM (a.k.a. CVE-2015-3456).
I have patched the qemu that OmniOS's KVM uses with a VENOM fix and pushed
updates on to the repo servers. Source people can consult:
https://github.com/joyent/illumos-kvm-cmd/commit/407546e5132f54065f3f78ac293ad7a8d16
I ran into the same issue when setting up my home server. Access to CIFS
works by IP but not name. I ended up setting up a second IP address and
created a DNS entry with a different name for that IP. I have no idea why
it works but it does.
Aaron
On Wed, May 13, 2015 at 6:10 AM, Dominik Hassler wrote:
> On May 13, 2015, at 5:02 AM, Dominik Hassler wrote:
>
> Any ideas why it only affects virtio nics and when the KVM is in a zone? Any
> ideas how to improve it?
I'm not 100% sure, but I suspect it has to do with the fact that KVM needs to
put the vnic/nic into promiscuous mode. In a zone, t
Did you try to end your FQDN with a trailing dot?
like: 'DATA.HOME.example.net.' in your example?
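The trailing dot matters because it marks the name as absolute, so the resolver tries it verbatim and never applies its search-domain list; without it, the resolver expands the name against the configured search domains. A minimal sketch of that expansion (the function name and the ndots default are illustrative, following the usual resolv.conf semantics, not any OmniOS-specific code):

```python
def candidate_fqdns(name, search_domains, ndots=1):
    """Mimic resolver search-list expansion, per resolv.conf semantics.

    A trailing dot marks the name as absolute: it is looked up verbatim
    and the search list is never applied. Otherwise, if the name has
    fewer than `ndots` dots, the search domains are tried first.
    """
    if name.endswith("."):
        # Absolute name: no search-list expansion at all.
        return [name]
    expanded = [f"{name}.{d}." for d in search_domains]
    if name.count(".") >= ndots:
        # "Qualified enough": try the literal name first, then the search list.
        return [name + "."] + expanded
    # Short name: try the search list first, then the literal name.
    return expanded + [name + "."]
```

This is why 'DATA.HOME.example.net.' resolves cleanly while the undotted form depends entirely on what search domains (and Kerberos SPNs matching the resulting names) are configured.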
Sent: Wednesday, May 13, 2015 at 13:40
From: "Matthew McGee"
To: omnios-discuss@lists.omniti.com
Subject: [OmniOS-discuss] CIFS Issues
I am attempting to migrate my CIFS shares from FreeNAS to OmniOS.
I have attempted a number of different installs and for now I am working in
a VM
for speed of reboots and testing.
I have Windows 2012 AD, and a number of Mac OSX & Windows 7 clients.
Server name = DATA
Domain HOME.example.net
I i
Matthew,
I have 'Intel I350' nics. It is not about virtio performance in general but
about the difference when the *same* KVM runs in the GZ versus in a NGZ.
> Sent: Wednesday, May 13, 2015 at 11:11
> From: "Matthew Lagoe"
> To: "'Dominik Hassler'" , omnios-discuss@lists.omniti.com
> Subject: RE
Some NICs don't handle the virtio stuff very well (Myricom, I'm looking at
you), so that could be part of the problem.
Intel is typically pretty good about it, however, so the e1000s working
doesn't surprise me.
What nics are you specifically having issues with that have the extra delay?
Original message
Hi,
I am running my KVMs in individual zones and seeing an increased ping rtt by a
factor of approx. 7 compared to ping rtt when running the same KVM inside the
GZ (cf. attached smokeping chart).
This *only* affects virtio nics, not e1000 nics. For e1000 nics the ping
rtt remains the same.
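For anyone wanting to reproduce the GZ-vs-NGZ comparison, the factor can be pulled straight out of ping's summary line. A small illustrative parser follows; the sample RTT figures in it are made up, chosen only to mirror the roughly 7x gap described above, and the function name is my own:

```python
import re

def avg_rtt_ms(ping_output):
    """Extract the average RTT in ms from a ping summary line.

    Handles both the Linux style ("rtt min/avg/max/mdev = ...") and the
    BSD/illumos style ("round-trip min/avg/max/stddev = ...").
    """
    m = re.search(r"= *([\d.]+)/([\d.]+)/([\d.]+)", ping_output)
    if not m:
        raise ValueError("no RTT summary found in ping output")
    return float(m.group(2))  # the second field is the average

# Made-up sample summaries for illustration only:
gz = avg_rtt_ms("round-trip min/avg/max/stddev = 0.210/0.250/0.310/0.040 ms")
ngz = avg_rtt_ms("round-trip min/avg/max/stddev = 1.600/1.750/1.900/0.120 ms")
print(f"NGZ/GZ RTT ratio: {ngz / gz:.1f}x")  # -> 7.0x with these sample numbers
```

Feeding it the summary line from a ping run inside the zone and one from the GZ gives the slowdown factor directly, which is handy for tracking whether a fix (e.g. the VND work mentioned in this thread) moves the needle.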