Stable Air Rigging Systems (Air Casters), Machinery moving skates, Toe Jack sta...@freebsd.org 07/24/2018

2018-07-23 Thread Finer Lifting Tools
Dear Stable Manager:
Good day. ^_^ How are you? 
Do you need machinery moving skates, all kinds of lifting jacks, or air casters?
Shan Dong Finer Lifting Tools Co., Ltd. has been producing all kinds of moving
tools professionally for more than 20 years, with high and durable quality.
Our moving roller skates range in capacity from 2 tons to more than 1000 tons
and can be customized to your requirements.
Hydraulic lifting jacks raise your loads easily and save cost, lasting for more
than 5 years; toe jacks can hold heavy loads on the toe as well as on the head.
Air casters range in capacity from 10 tons to more than 60 tons and can be
customized on demand. Air bearing casters run on compressed air: spark-free,
safe, easy to operate, and durable for more than ten years without quality
problems.
Call us and we will send you more details. ^_^
Thanks in advance.
Waiting for you. 
Best Regards. 
Export Manager: Faith Jiang
Shan Dong Finer Lifting Tools co., LTD
sales (at) cargotrolley (dot) com
Tel:0086-18954718083
Website: 3w (dot) cargotrolley(dot) com
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


All the memory eaten away by ZFS 'solaris' malloc - on 11.1-R amd64

2018-07-23 Thread Mark Martinec

After upgrading an older AMD host from FreeBSD 10.3 to 11.1-RELEASE-p11
(amd64), ZFS is gradually eating up all memory, so that it crashes every
few days when the memory is completely exhausted (after swapping heavily
for a couple of hours).

This machine has only 4 GB of memory. After capping the ZFS ARC
at 1.8 GB the machine can now stay up a bit longer, but within four days
all the memory is used up. The machine is lightly loaded: it runs
a BIND resolver and a lightly used web server, and the ps output
does not show any excessive memory use by any process.
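
For reference, the usual way to cap the ARC on 11.1 is the loader tunable
below; the exact value shown is just 1.8 GB expressed in bytes:

# /boot/loader.conf -- cap the ZFS ARC at 1.8 GB
vfs.zfs.arc_max="1932735283"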

During the last survival period I ran  vmstat -m  every second
and logged results. What caught my eye was the 'solaris' entry,
which seems to explain all the exhaustion.
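
For concreteness, a loop of roughly this form is enough to produce such a log
(the output file name is only an example):

# log the 'solaris' malloc line once per second
while true; do
    date +%s
    vmstat -m | grep -w solaris
    sleep 1
done >> /var/tmp/solaris-memuse.log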

The MemUse for the solaris entry starts modestly, e.g. after a few
hours of uptime:

$ vmstat -m
 Type InUse MemUse HighUse Requests  Size(s)
  solaris 3141552 225178K   - 12066929  16,32,64,128,256,512,1024,2048,4096,8192,16384,32768


... but this number keeps steadily growing.

After about four days, shortly before a crash, it grew to 2.5 GB,
which gets dangerously close to all the available memory:

  solaris 39359484 2652696K   - 234986296  16,32,64,128,256,512,1024,2048,4096,8192,16384,32768


Plotting the 'solaris' MemUse entry vs. wall time in seconds shows a steady
linear growth of about 25 MB per hour (roughly 2.4 GB over four days, which
matches the jump from ~225 MB to ~2.6 GB above). On a fine-resolution small
scale the step size seems to be one small increase about every 6 seconds.
All steps are small, but not all are the same size.

The only thing (in my mind) that distinguishes this host from others
running 11.1 seems to be that one of the two ZFS pools is down because
its disk is broken. This is a scratch data pool, not otherwise in use.
The pool with the OS is healthy.

The syslog shows entries like the following periodically:

Jul 23 16:48:49 xxx ZFS: vdev state changed, 
pool_guid=15371508659919408885 vdev_guid=11732693005294113354
Jul 23 16:49:09 xxx ZFS: vdev state changed, 
pool_guid=15371508659919408885 vdev_guid=11732693005294113354
Jul 23 16:55:34 xxx ZFS: vdev state changed, 
pool_guid=15371508659919408885 vdev_guid=11732693005294113354


The 'zpool status -v' on this pool shows:

  pool: stuff
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

NAME                    STATE     READ WRITE CKSUM
stuff                   UNAVAIL      0     0     0
  11732693005294113354  UNAVAIL      0     0     0  was /dev/da2


The same machine with this broken pool could previously survive indefinitely
under FreeBSD 10.3.

So, could this be the reason for the memory depletion?
Are there any fixes for that? Are there any more tests I should perform
before I try to get rid of this pool?
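
If dropping the pool turns out to be the answer, I assume a plain export is
all it takes, since the data on it is expendable anyway:

# presumably enough to take the broken pool out of the picture
zpool export -f stuff

... and then watch whether the 'solaris' MemUse keeps growing.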

  Mark
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


FreeBSD CTL device data/id questions

2018-07-23 Thread Eugene M. Zheganin

Hi,

I have a bunch of dumb (no sarcasm intended) questions concerning the FreeBSD 
CTL layer and iSCSI target management:


- is the "FREEBSD CTLDISK 0001" line that the ctladm lunlist is 
presenting, and that the initiators are seeing as the hwardware id 
hardcoded somewhere, especially the "CTLDISK 0001" part ?


- I am able to change the "FREEBSD" part of the above, but only from the 
configuration file (ctl.conf), not from ctladm at the create stage: when I 
issue -o vendor there is no error, and the vendor changes in the devlist but 
not in the lunlist; the vendors that do show up changed in the lunlist are the 
ones coming from the config. (A sketch of the stanza I mean follows after 
these questions.)


- Is the desire to change the "FREEBSD CTLDISK 0001" part weird? I currently 
consider it part of production maintenance, but I'm not sure. Is there a way 
to change it without touching the code?
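
For reference, the kind of ctl.conf stanza I mean looks roughly like this; the
portal group, target name and zvol path are only examples, "vendor" is the
option I actually changed, and "product"/"revision" are my guess for the rest
of the string:

portal-group pg0 {
    listen 0.0.0.0
}

target iqn.2018-07.com.example:target0 {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/lun0
        option vendor "MYVENDOR"
        option product "MYPRODUCT"
        option revision "0001"
    }
}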



Thanks.

___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


[Bug 228536] x11/nvidia-driver: 11.2-BETA3 - fails to operate correctly

2018-07-23 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=228536

--- Comment #7 from Dave Cottlehuber  ---
Hi Paco! Welcome to FreeBSD. I used translate.google.com as I don't speak
Spanish. I'll put some quick notes here on how to build this package from
source, but if you have any trouble please open a conversation over at
https://forums.freebsd.org/, in English if you can. It's normal to use a new
post or bug for a different issue.

This assumes you included src and ports during installation.
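
If the ports tree is missing, fetch it first (as root):

# only needed if /usr/ports is empty
portsnap fetch extract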


# as root
pkg delete nvidia-driver
cd /usr/ports/x11/nvidia-driver
make package install clean

I don't load drivers in /boot/loader.conf if possible, as it's easier to fix
issues if they are loaded later on.

# put this in /etc/rc.conf
kld_list="${kld_list} nvidia-modeset"

then reboot and good luck!
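
If you want to try the driver without a full reboot, loading the module by
hand should also work (assuming X is not running yet):

# optional: load the module immediately instead of rebooting
kldload nvidia-modeset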

-- 
You are receiving this mail because:
You are on the CC list for the bug.
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"