Hi all,
A hypervisor used to host slaves for the continuous integration service is
reaching its maximum storage capacity (the primary storage used for running
virtual machines).
To avoid VM corruption, we preventively stopped the VMs running on this hypervisor.
For the Pharo ecosystem, the following VMs
Could we not use one of the Mac machines lying around to have our own space?
On Aug 22, 2013, at 2:15 PM, Christophe Demarey christophe.dema...@inria.fr
wrote:
Esteban,
On 19 Aug 2013, at 13:24, Esteban Lorenzano esteba...@gmail.com wrote:
PharoContributor new
name: 'Esteban Lorenzano';
id: 'estebanlm';
email: 'esteba...@gmail.com';
website: 'http://smallworks.eu';
description: 'Pharo core team. Contributor of
We can also add a mac slave in our offices. We just have one for pharo /
pharo-contribution / rmod.
On 22 August 2013 at 16:24, Stéphane Ducasse wrote:
Could we not use one of the Mac machines lying around to have our own space?
On Aug 22, 2013, at 2:15 PM, Christophe Demarey
Martin,
On 19 Aug 2013, at 18:16, Martin Dias tinchod...@gmail.com wrote:
PharoContributor new
id: 'tinchodias';
name: 'Martín Dias';
email: 'tinchod...@gmail.com';
description: 'Master at UBA. PhD student at INRIA. Contributor in Fuel
project and in Pharo in general.';
yourself
Great, my gravatar image is fine :)
Thanks a lot!
On 22 August 2013 11:54, Sven Van Caekenberghe s...@stfx.eu wrote:
Clara,
On 19 Aug 2013, at 14:13, Clara Allende clari.alle...@gmail.com wrote:
PharoContributor new
name: 'Clara Allende';
id: 'ClaraAllende';
Hi -
The plain Pharo 20619 + RFB image in my dropbox here:
https://dl.dropboxusercontent.com/u/4460862/pharo2RFB.zip freezes when
you save it while the RFB server is running. The freeze occurs in the
#snapshotPrimitive.
This is the VM info I'm using:
3.9-7 #1 Wed Mar 13 18:22:44 CET 2013 gcc
Paul DeBruicker wrote
In this instance, that doesn't output anything. Specifically:
$ ps -A | grep pharo
6001 pts/0    00:00:45 pharo
$ kill -s SIGUSR1 6001
$
Oh no wait. I'm an idiot. It spits out this in the terminal where the
pharo process is running:
stack page bytes 4096
On 22 Aug 2013, at 19:34, Paul DeBruicker pdebr...@gmail.com wrote:
Hi -
The plain Pharo 20619 + RFB image in my dropbox here:
https://dl.dropboxusercontent.com/u/4460862/pharo2RFB.zip freezes when
you save it while the RFB server is running. The freeze occurs in the
#snapshotPrimitive.
In this instance, that doesn't output anything. Specifically:
$ ps -A | grep pharo
6001 pts/0    00:00:45 pharo
$ kill -s SIGUSR1 6001
$
Mariano Martinez Peck wrote
If you run the VM from command line and you send a kill -s SIGUSR1 ... it
should display the stacktrace of the VM in
Hi Henrik,
do you have an implementation for #when:do:for:? Because I do not see it:
https://pharo.fogbugz.com/default.asp?11316
Stef
#when:do: is there already ;)
even better then :)
#when:send:to: and #when:do:for: should be added though.
What would #when:do:for: be?
Its
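For context, a minimal sketch of how these Announcer subscription variants differ, assuming Pharo's Announcer API; the selectors #logAnnouncement: and #handle: are hypothetical placeholders, and #when:do:for: here is the addition proposed in this thread, not a method guaranteed to exist in this Pharo version:

```smalltalk
| announcer |
announcer := Announcer new.

"Block-based subscription; already present."
announcer when: Announcement do: [ :ann | Transcript showln: ann printString ].

"Message-send subscription: forwards the announcement to a receiver
(selector #logAnnouncement: is a hypothetical example)."
announcer when: Announcement send: #logAnnouncement: to: self.

"Proposed #when:do:for: would also record a subscriber object, so all of
its subscriptions can later be dropped with a single #unsubscribe:."
announcer when: Announcement do: [ :ann | self handle: ann ] for: self.
announcer unsubscribe: self.
```

The practical difference is lifecycle management: block subscriptions without a recorded subscriber are hard to remove selectively, which is why the thread suggests adding the #for: variant.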
Sven Van Caekenberghe-2 wrote
On 22 Aug 2013, at 19:34, Paul DeBruicker <pdebruic@...> wrote:
Hi -
The plain Pharo 20619 + RFB image in my dropbox here:
https://dl.dropboxusercontent.com/u/4460862/pharo2RFB.zip freezes when
you save it while the RFB server is running. The freeze
Looks quite healthy to me. The image is idle, doing nothing.
On 22 August 2013 20:21, Paul DeBruicker pdebr...@gmail.com wrote:
Paul DeBruicker wrote
In this instance, that doesn't output anything. Specifically:
$ ps -A | grep pharo
6001 pts/0    00:00:45 pharo
$ kill -s SIGUSR1 6001
So when you open the image I posted and in the workspace run
RFBServer start.
Smalltalk snapshot: true andQuit: false.
Everything works fine? It doesn't go to 100% CPU use?
Hi,
try this:
This will start opening an infinite number of debuggers, because of
A doesNotUnderstand: #halt.
OK, so if I replace 'self halt' with 'Halt signal', it will do the same,
but now because of A doesNotUnderstand: #inspector.
But in both cases, after successfully (before stack is full)
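A minimal sketch of the situation being described, assuming A is a subclass of ProtoObject (the class name comes from the thread; the class definition itself is my reconstruction): ProtoObject implements #doesNotUnderstand: but not #halt, so the handler's own #halt send is again not understood and re-enters the handler:

```smalltalk
"Reconstruction: a minimal ProtoObject subclass whose DNU handler
itself sends a message the class does not understand."
ProtoObject subclass: #A
	instanceVariableNames: ''
	classVariableNames: ''
	category: 'Sandbox'.

A compile: 'doesNotUnderstand: aMessage
	self halt'.

"Any unknown message now loops: #foo -> DNU -> #halt -> DNU -> ...
each iteration opening a fresh debugger until the stack fills."
A new foo.
```

Replacing self halt with Halt signal only moves the failure: the debugger machinery then sends other messages (such as #inspector) that the minimal class also lacks, producing the same cascade.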