Re: My experience with the official release of Sculpt TC

2018-06-22 Thread Norman Feske
Hi Guido,

thanks for the prompt feedback and the thorough testing! We will likely
update the downloadable image the next time we update our master branch.

> Other news:
> 1. I noticed that the debian iso-download on this 06-21 image was
> jittery, stopping the now-counter for a second every few MBs. Is that an
> effect of these patches?

I don't think so. Supposing you were testing this with the RAM fs, the
jittery effect may be caused by the successive expansion of the RAM file
system. Each time it runs out of RAM quota, it blocks until the sculpt
manager explicitly increases the quota. The current quota is displayed
as capacity value of the 'ram' storage item. When it stops for a moment,
you'll most likely see the value increase shortly afterward, before the
download continues.

> 
> 2. With the old (sculpt-tc) image, the debian download sometimes hung
> with what looked like a filesystem lockup. I could not save the
> config/deploy, nor exit vim. Other processes seemed unaffected. Is that
> lockup to be expected from the old code?

No. Instabilities of the rump-based file-system server are rare; I have
seen them only twice in the last two months. Once this
happens, you may try closing the inspect window (by disabling all
'inspect' buttons), and then opening the inspect window for the ram fs
(which should not be affected). Then you can inspect the '/report/log'
for file-system-related error messages.

It would be great if you could save the log should you encounter the
problem again.

Cheers
Norman

-- 
Dr.-Ing. Norman Feske
Genode Labs

https://www.genode-labs.com · https://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth

___
Genode users mailing list
users@lists.genode.org
https://lists.genode.org/listinfo/users

Re: My experience with the official release of Sculpt TC

2018-06-22 Thread Guido Witmond
I forgot to send this to the list for everyone, not just the spooks who
tapped the line ;-)


On 06/21/2018 01:51 PM, Norman Feske wrote:

> Hi Guido,
>
> thanks for the report and the log. I have a faint suspicion.
>
> Could you please give the following image a try?
>
>    http://genode.org/files/nfeske/sculpt-2018-06-21.img

Hi Norman,

Your hunch was correct, this image runs the expand process correctly on
my 'bad' stick. I ran it three times just to make sure, both in usb2 and
usb3 sockets. All good.

Other news:
1. I noticed that the debian iso-download on this 06-21 image was
jittery, stopping the now-counter for a second every few MBs. Is that an
effect of these patches?

2. With the old (sculpt-tc) image, the debian download sometimes hung
with what looked like a filesystem lockup. I could not save the
config/deploy, nor exit vim. Other processes seemed unaffected. Is that
lockup to be expected from the old code?

It could also have been a memory error I encountered with memtest. But
that was transient. I could not pinpoint it to a single DIMM, nor did it
occur any time after that. Perhaps this iron is getting old and flaky.

Anyway, thanks for the patch.

I've attached the log.

Cheers, Guido.




log-expand-ok-on-img-2018-06-21
Description: Binary data

Sculpt TC and multi core virtual machines

2018-06-22 Thread Duss Pirmin

Hello list

I have played around with Sculpt TC a lot recently, and I like it.
One thing I'm curious about: when I do a parallel build inside my
virtual machine, which is configured with two cores, both vCPU
threads run on the same core/hyperthread according to the top_view
component.

Most of the time only CPU 0.0 has any work assigned.

I suppose that by adding affinity-space and affinity nodes to the
configuration it would be possible to assign a component to a specific
core. Would adding such affinity specifications have a negative impact
on machines with more or fewer cores?
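To make my idea concrete, here is the kind of configuration I have in
mind (an untested sketch; the affinity-space/affinity node names and
attributes are my reading of the Genode documentation, and the
component name is just an example):

```xml
<!-- untested sketch: declare a 4x1 affinity space in the init
     configuration and pin one component to a subset of it -->
<config>
  <affinity-space width="4" height="1"/>

  <start name="vbox" caps="1000">
    <!-- assign CPUs 2 and 3 of the affinity space to this component -->
    <affinity xpos="2" width="2"/>
    ...
  </start>
</config>
```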


Is there a possibility to know which hyperthreads run on the same core?
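(On a Linux host I would read the sysfs topology for this, roughly as
in the sketch below, but I don't know the Genode equivalent:)

```shell
# List which logical CPUs (hyperthreads) share a physical core,
# using the Linux sysfs topology. This is host-side only, not Genode.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  siblings="$cpu/topology/thread_siblings_list"
  if [ -r "$siblings" ]; then
    printf '%s: core siblings %s\n' "${cpu##*/}" "$(cat "$siblings")"
  fi
done
```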


best regards, Pirmin

