Hi,
my problem occurs on a 151a server with a large filesystem (12 TB) exported
via sun-cifs.
When doing a simple dir from a Windows mount, or from a Linux or even a
local smbclient connection,
I sometimes get a delay of about 20 seconds, sometimes an
NT_STATUS_IO_TIMEOUT, and sometimes an instant
Hi, after reading a lot, I understand that LACP aggregation won't let me run a
single
connection on all the available links, but just allocate one per connection.
So, considering this, and considering one NFS client mounting a share through
an aggregated
path, I was wondering if multiple files
On 10/20/11 06:24, Gabriele Bulfon wrote:
Hi, after reading a lot, I understand that LACP aggregation won't let me run
a single
connection on all the available links, but just allocate one per connection.
So, considering this, and considering one NFS client mounting a share through
an
On Wed, Oct 19, 2011 at 4:16 AM, Jonathan Adams t12nsloo...@gmail.com wrote:
did you set 'vboxnet0's IP address to 192.168.56.1/24 after you brought it up?
On 18 October 2011 22:12, Ron Parker rdpar...@gmail.com wrote:
...
After an 'ipadm create-if vboxnet0', it shows up in ifconfig's output
I'm using nwam, so I have the vboxnet0 in /etc/nwam/llp and
/etc/nwam/ncp-User.conf
On 20 October 2011 15:10, Ron Parker rdpar...@gmail.com wrote:
On Wed, Oct 19, 2011 at 4:16 AM, Jonathan Adams t12nsloo...@gmail.com wrote:
did you set 'vboxnet0's IP address to 192.168.56.1/24 after you brought
If you need multiple connections, use multiple IPs and/or host names.
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss
On Thu, Oct 20, 2011 at 9:30 AM, Jonathan Adams t12nsloo...@gmail.com wrote:
I'm using nwam, so I have the vboxnet0 in /etc/nwam/llp and
/etc/nwam/ncp-User.conf
Jonathan, Thanks. Due to your responses, I was able to get it working
either way, with ipadm or nwam.
The last thing I had to realize
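For reference, a plausible ipadm sequence for this setup, reconstructed from the fragments above (the exact commands and the v4 object name are an assumption, not quoted from the thread):

```
# Assumed reconstruction: create the VirtualBox host-only interface
# and assign it the 192.168.56.1/24 address discussed above.
ipadm create-if vboxnet0
ipadm create-addr -T static -a local=192.168.56.1/24 vboxnet0/v4
```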
Hello all,
I have a machine here at my home running OpenIndiana oi_151a, which
serves as a NAS on my home network. The original install was OpenSolaris
2009.6 which was later upgraded to snv_134b, and recently to oi_151a.
So far this OSOL (now OI) box has performed excellently, with one
No they're just set up in DNS. After b134 (any flavor, OpenSolaris, Solaris
11 Express) it won't accept CIFS connections to DNS aliases.
Windows 2008 is the same way, but you can flip a registry setting to let SMB
connections come in on DNS aliases even if it's not set as an SPN alias (the
more
Sorry I meant it won't accept DNS aliases at the same time.
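The registry setting alluded to above is presumably DisableStrictNameChecking (an assumption based on the description; it is the documented way to let Windows accept inbound SMB connections on unregistered names). As a .reg fragment:

```
Windows Registry Editor Version 5.00

; Assumed to be the setting described above: allow inbound SMB
; connections on names not registered as SPN/NetBIOS aliases.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"DisableStrictNameChecking"=dword:00000001
```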
On Thu, Oct 20, 2011 at 9:48 AM, Jonathan Leafty
jleafty+openindianadisc...@gmail.com wrote:
No they're just set up in DNS. After b134 (any flavor, OpenSolaris,
Solaris 11 Express) it won't accept CIFS connections to DNS aliases.
On Thu, 2011-10-20 at 18:44 +0200, Gernot Wolf wrote:
Hello all,
I have a machine here at my home running OpenIndiana oi_151a, which
serves as a NAS on my home network. The original install was OpenSolaris
2009.6 which was later upgraded to snv_134b, and recently to oi_151a.
So far this
I saw a comment that it was fixed in OpenSolaris snv_161 (I think), but you
can't get that.
It seems to work for me in OI 148; I haven't tested it extensively yet,
but opening two windows on two different aliases works.
John
-Original Message-
From: Jonathan Leafty
Dontchya just love dtrace?
On 10/20/11 10:22 AM, Michael Stapleton
michael.staple...@techsologic.com wrote:
Hi Gernot,
You have a high context switch rate.
try
#dtrace -n 'sched:::off-cpu { @[execname]=count()}'
for a few seconds to see if you can get the name of an executable.
Mike
On
On Thu, 2011-10-20 at 19:34 +0200, Gernot Wolf wrote:
Wow, that was fast :)
Just caught me with the morning coffee email review.
However, the NIC integrated on the Intel DG965WHMKR mainbord is an Intel
82566DC according to the device driver utility, the reported driver
e1000g. Isn't the
Sched is the scheduler itself. How long did you let this run? If only
for a couple of seconds, then that number is high, but not ridiculous for
a loaded system, so I think that this output rules out a high context
switch rate.
Try this command to see if some process is making an excessive
That rules out userland.
Sched tells me that it is not a user process. If kernel code is
executing on a cpu, tools will report the sched process. The count was
how many times the process was taken off the CPU while dtrace was
running.
Let's see what kernel code is running the most:
#dtrace -n
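The one-liner is cut off in the archive; a kernel-profiling one-liner of this general shape (an assumption, not necessarily the exact command that was sent) would be:

```
# Sample on-CPU stacks at ~997 Hz; the /arg0/ predicate keeps only
# samples taken in kernel code (arg0 is the kernel PC for the
# profile probe, 0 when the CPU was in userland).
dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }'
```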
Wow, that was fast :)
Just caught me with the morning coffee email review.
Well, I just had a nice dinner :)
However, the NIC integrated on the Intel DG965WHMKR mainbord is an Intel
82566DC according to the device driver utility, the reported driver
e1000g. Isn't the bge driver for
Gernot,
is there anything suspicious in /var/adm/messages?
Michael
On Thu, Oct 20, 2011 at 20:07, Michael Stapleton
michael.staple...@techsologic.com wrote:
That rules out userland.
Sched tells me that it is not a user process. If kernel code is
executing on a cpu, tools will report the
My understanding is that it is not supposed to be a loaded system. We
want to know what the load is.
gernot@tintenfass:~# intrstat 30
      device |      cpu0 %tim      cpu1 %tim
-------------+-------------------------------
    e1000g#0 |         1  0,0         0  0,0
ehci#0 | 0
Don't know. I don't like to troubleshoot by guessing if possible. I'd rather
follow the evidence to capture the culprit. Use what we know to discover
what we do not know.
We know CS rate in vmstat is high, we know Sys time is high, we know
syscall rate is low, we know it is not a user process
On Thu, Oct 20, 2011 at 20:33, Michael Stapleton
michael.staple...@techsologic.com wrote:
Don't know. I don't like to troubleshoot by guessing if possible. I'd rather
follow the evidence to capture the culprit. Use what we know to discover
what we do not know.
if you're answering my question: I'm
I'd like to see a run of the script I sent earlier. I don't trust
intrstat (not for any particular reason, other than that I have never used
it)...
On 10/20/11 11:33 AM, Michael Stapleton
michael.staple...@techsologic.com wrote:
Don't know. I don't like to troubleshoot by guessing if possible. I
You might be right.
But 45% of what?
Profiling interrupt: 5844 events in 30.123 seconds (194 events/sec)
Count indv cuml rcnt     nsec Hottest CPU+PIL        Caller
-------------------------------------------------------------------------------
 2649  45%  45% 0.00     1070 cpu[1]
On Thu, Oct 20, 2011 at 20:55, Michael Stapleton
michael.staple...@techsologic.com wrote:
You might be right.
But 45% of what?
Profiling interrupt: 5844 events in 30.123 seconds (194 events/sec)
Count indv cuml rcnt     nsec Hottest CPU+PIL        Caller
+1
Mike
On Thu, 2011-10-20 at 11:47 -0700, Rennie Allen wrote:
I'd like to see a run of the script I sent earlier. I don't trust
intrstat (not for any particular reason, other than that I have never used
it)...
On 10/20/11 11:33 AM, Michael Stapleton
michael.staple...@techsologic.com
Profiling is AFAIK statistical, so it might not show the correct number.
Certainly the count of interrupts does not appear high, but if the handler
is spending a long time in the interrupt...
The script I sent measures the time spent in the handler (intrstat might do
this as well, but I just
You mean, besides being quite huge? I took a quick look at it, but other
than getting a headache by doing that, my limited unix skills
unfortunately fail me.
I've zipped it and attached it to this mail, maybe someone can get
anything out of it...
Regards,
Gernot
On 20.10.11 20:17,
Here are the results (let the script run for a few secs):
CPU     ID                    FUNCTION:NAME
  1      2                             :END

DEVICE       TIME (ns)
i9151            22111
heci0            23119
pci-ide0         38700
uhci1
Oops, something went wrong with my attachment. I'll try again...
Regards,
Gernot Wolf
On 20.10.11 21:09, Gernot Wolf wrote:
You mean, besides being quite huge? I took a quick look at it, but other
than getting a headache by doing that, my limited unix skills
unfortunately fail me.
I've
I let it run (as all the other dtrace commands you guys have given me)
just for a couple of seconds. And no, it's not a loaded system, that's
the problem here. It's just a home NAS...
Here is the dtrace output:
gernot@tintenfass:/root# dtrace -n 'syscall:::entry { @[execname]=count()}'
Gernot Wolf wrote:
Ok, for some reason this attachment refuses to go out :( Have to figure
that out...
Probably just because it's huge. Try tail -100 /var/adm/messages.
It's likely that if there's something going nuts on your system,
there'll be enough log-spam to identify it.
--
James
Nope. Cpu load remains the same. top shows:
CPU states: 47.5% idle, 0.0% user, 52.5% kernel, 0.0% iowait, 0.0% swap
Regards,
Gernot Wolf
On 20.10.11 20:25, Michael Schuster wrote:
Hi,
just found this:
http://download.oracle.com/docs/cd/E19253-01/820-5245/ghgoc/index.html
does it help?
Probably just too big.
Are there any ACPI settings in the BIOS?
Or we can try to change the ACPI settings in OI.
#man eeprom
.
.
.
OPERANDS
x86 Only
acpi-user-options
A configuration variable that controls the use of
Advanced Configuration and Power Interface (ACPI), a
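For context, the change that resolved the issue later in the thread was made via eeprom; a sketch of the steps (the value 0x8 is taken from the later message reporting the fix; its exact semantics are described in the eeprom man page excerpted above):

```
# Set the acpi-user-options value that fixed the load problem
# later in the thread, then reboot for it to take effect.
eeprom acpi-user-options=0x8
reboot
```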
On 20.10.11 20:57, Michael Schuster wrote:
On Thu, Oct 20, 2011 at 20:55, Michael Stapleton
michael.staple...@techsologic.com wrote:
You might be right.
But 45% of what?
Profiling interrupt: 5844 events in 30.123 seconds (194 events/sec)
Count indv cuml rcnt nsec Hottest CPU+PIL
Well, I zipped it, the zipfile is just 211K? Shouldn't be a problem, I
think...
Regards,
Gernot Wolf
On 20.10.11 21:38, James Carlson wrote:
Gernot Wolf wrote:
Ok, for some reason this attachment refuses to go out :( Have to figure
that out...
Probably just because it's huge. Try
Ok, here we go:
gernot@tintenfass:~# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc
pcplusmp scsi_vhci zfs ip hook neti sockfs arp usba uhci s1394 fctl
stmf_sbd stmf idm fcip cpc random sata crypto sd lofs logindmux ptm ufs
sppp smbsrv nfs ipc ]
AcpiDbgLevel
I would not worry about it. The messages are being caused by some
problem. Let's focus on getting the messages.
Debug will increase your load, but not like you are seeing.
Mike
On Thu, 2011-10-20 at 22:10 +0200, Gernot Wolf wrote:
Ok, here we go:
gernot@tintenfass:~# mdb -k
Loading modules:
Is this running in a VM?
Mike
On Thu, 2011-10-20 at 22:20 +0200, Gernot Wolf wrote:
Grep output attached. Hopefully this attachment will go through ;)
Regards,
Gernot Wolf
On 20.10.11 21:25, Michael Stapleton wrote:
Attachment is missing...
I'd like to see the whole things,
Ok, I could not resist giving it a try, screw my bed ;)
Mike, bingo! That one hit home. With acpi-user-options set to 0x08 and
subsequent reboot cpu load is back to normal (that is load average=0.05).
I'll run my diagnostics again on my system and post the results in case
anyone is