[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13471641#comment-13471641
 ] 

Caleb Call commented on CLOUDSTACK-105:
---------------------------------------

Our servers are all in bridge mode (per the instructions in the CloudStack 
setup guide).  

[root@cloud-hv01 log]# cat /etc/xensource/network.conf 
bridge
[root@cloud-hv01 log]# 

I also noticed that the following messages are being logged over and over in 
/var/log/messages.  They look related to Open vSwitch: ovs-vsctl repeatedly 
fails to connect to /var/run/openvswitch/db.sock.

Oct  8 10:06:54 cloud-hv01 syslogd 1.4.1: restart.
Oct  8 10:06:57 cloud-hv01 ovs-vsctl: 
1568403|stream_unix|ERR|/tmp/stream-unix.9679.784180: connection to 
/var/run/openvswitch/db.sock failed: No such file or directory
Oct  8 10:06:57 cloud-hv01 ovs-vsctl: 
1568404|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt 
failed (No such file or directory)
Oct  8 10:07:05 cloud-hv01 ovs-vsctl: 
1568405|stream_unix|ERR|/tmp/stream-unix.9679.784181: connection to 
/var/run/openvswitch/db.sock failed: No such file or directory
Oct  8 10:07:05 cloud-hv01 ovs-vsctl: 
1568406|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt 
failed (No such file or directory)
Oct  8 10:07:13 cloud-hv01 ovs-vsctl: 
1568407|stream_unix|ERR|/tmp/stream-unix.9679.784182: connection to 
/var/run/openvswitch/db.sock failed: No such file or directory
Oct  8 10:07:13 cloud-hv01 ovs-vsctl: 
1568408|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection attempt 
failed (No such file or directory)

I've also seen the same stale socket files left behind on a completely 
different CloudStack cluster running XenServer in bridge mode.
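For anyone hitting the same symptoms, a quick way to confirm both the inode 
exhaustion and the missing OVS database socket is the following (standard 
coreutils commands plus the socket path from the log above; this check is my 
suggestion, not part of the original report):

```shell
# Inode usage on / -- an IUse% of 100% matches the read-only-root failure mode.
df -i /

# How many stale stream-unix sockets have accumulated in /tmp.
find /tmp -maxdepth 1 -name 'stream-unix.*' | wc -l

# Does the OVS database socket that ovs-vsctl keeps failing to reach exist?
ls -l /var/run/openvswitch/db.sock 2>/dev/null || echo "db.sock missing"
```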
                
> /tmp/stream-unix.####.###### stale sockets causing inodes to run out on 
> Xenserver
> ---------------------------------------------------------------------------------
>
>                 Key: CLOUDSTACK-105
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-105
>             Project: CloudStack
>          Issue Type: Bug
>          Components: XenServer
>    Affects Versions: pre-4.0.0
>         Environment: Xenserver 6.0.2
> Cloudstack 3.0.2
>            Reporter: Caleb Call
>            Assignee: Devdeep Singh
>             Fix For: 4.1.0
>
>
> We came across an interesting issue in one of our clusters.  We ran out of 
> inodes on all of our cluster members (since when does this happen in 2012?).  
> When this happened, the / filesystem was remounted read-only, which in turn 
> made all the hosts go into emergency maintenance mode and, as a result, get 
> marked down by CloudStack.  We found the cause to be hundreds of thousands 
> of stale socket files in /tmp named "stream-unix.####.######".  To resolve 
> the issue, we had to delete those stale socket files 
> (find /tmp -name "*stream*" -mtime +7 -exec rm -v {} \;), then kill and 
> restart xapi, and finally clear the emergency maintenance mode.  These hosts 
> had only been up for 45 days before this issue occurred.  
> In our scouring of the interwebs, the only other instance we've been able to 
> find of this (or something similar) happening is in the same setup we are 
> currently running: XenServer 6.0.2 with CloudStack 3.0.2.  Do these 
> stream-unix sockets have anything to do with CloudStack?  If this were a 
> XenServer bug, I would expect a lot more reports of it on the internet.  As 
> a temporary workaround, we've added a cron job to clean up these files, but 
> we'd really like to address the actual issue that's causing these sockets to 
> become stale and never get cleaned up.
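The cron-based workaround mentioned in the report could be sketched like this 
(the script name and daily schedule are assumptions of mine; the stream-unix 
pattern and seven-day -mtime threshold mirror the find command quoted above):

```shell
#!/bin/sh
# Hypothetical /etc/cron.daily/clean-stale-sockets script: remove
# stream-unix sockets in /tmp that have not been modified for more
# than 7 days, per the workaround described in the report.
find /tmp -name 'stream-unix.*' -mtime +7 -exec rm -v {} \;
```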

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira