Re: Networking Support in VirtualBox

2017-07-27 Thread Christian Helmuth
Chris,

some annotations of your log below that may help.

On Wed, Jul 26, 2017 at 09:58:11AM -0400, Chris Rothrock wrote:
> NOVA Microhypervisor v7-2006635 (x86_64): Jul 25 2017 11:23:13 [gcc 6.3.0]
> [MBI]
> 
> [ 0] TSC:340 kHz BUS:0 kHz DL
> [ 0] CORE:0:0:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> [ 3] CORE:0:3:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> [ 2] CORE:0:2:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> [ 1] CORE:0:1:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> [ 0] disabling super pages for DMAR
> Hypervisor features VMX
> Hypervisor reports 4x1 CPUs
> CPU ID (genode->kernel:package:core:thread) remapping
>  remap (0->0:0:0:0) boot cpu
>  remap (1->1:0:1:0)
>  remap (2->2:0:2:0)
>  remap (3->3:0:3:0)
> Hypervisor info page contains 26 memory descriptors:
> core image  [0010,02682000)
> binaries region [00226000,02682000) free for reuse
> detected physical memory: 0x - size: 0x0008ec00
> use  physical memory: 0x - size: 0x0008e000
> detected physical memory: 0x0010 - size: 0xb2bfb000
> use  physical memory: 0x0010 - size: 0xb2bfb000
> detected physical memory: 0xb3aff000 - size: 0x1000
> use  physical memory: 0xb3aff000 - size: 0x1000
> detected physical memory: 0x0001 - size: 0x00013f80
> use  physical memory: 0x0001 - size: 0x00013f80
> :virt_alloc: Allocator 0x1e76f0 dump:
>  Block: [2000,3000) size=4K avail=0 max_avail=0
>  Block: [3000,4000) size=4K avail=0 max_avail=0
>  Block: [4000,5000) size=4K avail=0 max_avail=0
>  Block: [5000,6000) size=4K avail=0 max_avail=0
>  Block: [6000,7000) size=4K avail=0 max_avail=0
>  Block: [7000,8000) size=4K avail=0 max_avail=0
>  Block: [8000,9000) size=4K avail=0 max_avail=0
>  Block: [9000,a000) size=4K avail=0 max_avail=0
>  Block: [a000,b000) size=4K avail=0 max_avail=0
>  Block: [b000,c000) size=4K avail=0 max_avail=0
>  Block: [c000,d000) size=4K avail=0 max_avail=0
>  Block: [d000,e000) size=4K avail=0 max_avail=0
>  Block: [e000,f000) size=4K avail=0 max_avail=0
>  Block: [f000,0001) size=4K avail=0 max_avail=0
>  Block: [0001,00011000) size=4K avail=0 max_avail=0
>  Block: [00011000,00012000) size=4K avail=0 max_avail=0
>  Block: [00012000,00013000) size=4K avail=0 max_avail=0
>  Block: [00013000,00014000) size=4K avail=0
> max_avail=137434760164K
>  Block: [00014000,00015000) size=4K avail=0 max_avail=0
>  Block: [00015000,00016000) size=4K avail=0 max_avail=0
>  Block: [00016000,00017000) size=4K avail=0 max_avail=0
>  Block: [00017000,00018000) size=4K avail=0 max_avail=0
>  Block: [00018000,00019000) size=4K avail=0 max_avail=0
>  Block: [00019000,0001a000) size=4K avail=0 max_avail=908K
>  Block: [0001a000,0001b000) size=4K avail=0 max_avail=0
>  Block: [0001b000,0001c000) size=4K avail=0 max_avail=908K
>  Block: [0001c000,0001d000) size=4K avail=0 max_avail=0
>  Block: [0001d000,0010) size=908K avail=908K
> max_avail=908K
>  Block: [00226000,00227000) size=4K avail=0 max_avail=0
>  Block: [00227000,00228000) size=4K avail=0
> max_avail=137434760164K
>  Block: [00228000,00229000) size=4K avail=0 max_avail=0
>  Block: [00229000,a000) size=2619228K avail=2619228K
> max_avail=2619228K
>  Block: [b000,bfeff000) size=261116K avail=261116K
> max_avail=137434760164K
>  Block: [bff04000,7fffbfffd000) size=137434760164K
> avail=137434760164K max_avail=137434760164K
>  => mem_size=140736144932864 (134216446 MB) / mem_avail=140736144809984
> (134216446 MB)
> 
> :phys_alloc: Allocator 0x1e6620 dump:
>  Block: [1000,2000) size=4K avail=0 max_avail=0
>  Block: [2000,3000) size=4K avail=0 max_avail=0
>  Block: [3000,4000) size=4K avail=0 max_avail=0
>  Block: [4000,5000) size=4K avail=0 max_avail=0
>  Block: [5000,6000) size=4K avail=0 max_avail=0
>  Block: [6000,7000) size=4K avail=0 max_avail=0
>  Block: [7000,8000) size=4K avail=0 max_avail=0
>  Block: [8000,9000) size=4K avail=0 max_avail=0
>  Block: 

Re: Networking Support in VirtualBox

2017-07-26 Thread Chris Rothrock
I have pulled the latest commit and created a new build (it still shows
the same condition: no video in either VM window, but the GUI still
works).  The serial log output is below.  Several items stand out to
me: assertion failures from [init -> vbox1] EMT and [init -> vbox2]
EMT, and a missing NAT driver.  That would explain why the vboxes are
failing, if the NAT has no driver behind it.  The virtualbox run recipe
has the nic_bridge enabled (because use_net and use_gui are both
enabled), so a component related to the NAT driver appears to be
missing.
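
For reference, a run script with use_net enabled would typically
instantiate a NIC driver and the nic_bridge multiplexer roughly along
these lines (a minimal sketch of Genode init config; RAM quanta and
other attribute values are illustrative, not taken from the actual
recipe):

  <start name="nic_drv">
    <resource name="RAM" quantum="8M"/>
    <provides> <service name="Nic"/> </provides>
  </start>

  <start name="nic_bridge" caps="200">
    <resource name="RAM" quantum="8M"/>
    <provides> <service name="Nic"/> </provides>
    <route>
      <!-- the bridge multiplexes its clients onto the one real NIC -->
      <service name="Nic"> <child name="nic_drv"/> </service>
      <any-service> <parent/> <any-child/> </any-service>
    </route>
  </start>

If the generated config lacks such a nic_drv/nic_bridge pair, the Nic
session requests coming from the VMs' network devices have nothing to
be routed to, which would match the symptoms described above.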



NOVA Microhypervisor v7-2006635 (x86_64): Jul 25 2017 11:23:13 [gcc 6.3.0]
[MBI]

[ 0] TSC:340 kHz BUS:0 kHz DL
[ 0] CORE:0:0:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 3] CORE:0:3:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 2] CORE:0:2:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 1] CORE:0:1:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 0] disabling super pages for DMAR
Hypervisor features VMX
Hypervisor reports 4x1 CPUs
CPU ID (genode->kernel:package:core:thread) remapping
 remap (0->0:0:0:0) boot cpu
 remap (1->1:0:1:0)
 remap (2->2:0:2:0)
 remap (3->3:0:3:0)
Hypervisor info page contains 26 memory descriptors:
core image  [0010,02682000)
binaries region [00226000,02682000) free for reuse
detected physical memory: 0x - size: 0x0008ec00
use  physical memory: 0x - size: 0x0008e000
detected physical memory: 0x0010 - size: 0xb2bfb000
use  physical memory: 0x0010 - size: 0xb2bfb000
detected physical memory: 0xb3aff000 - size: 0x1000
use  physical memory: 0xb3aff000 - size: 0x1000
detected physical memory: 0x0001 - size: 0x00013f80
use  physical memory: 0x0001 - size: 0x00013f80
:virt_alloc: Allocator 0x1e76f0 dump:
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: [5000,6000) size=4K avail=0 max_avail=0
 Block: [6000,7000) size=4K avail=0 max_avail=0
 Block: [7000,8000) size=4K avail=0 max_avail=0
 Block: [8000,9000) size=4K avail=0 max_avail=0
 Block: [9000,a000) size=4K avail=0 max_avail=0
 Block: [a000,b000) size=4K avail=0 max_avail=0
 Block: [b000,c000) size=4K avail=0 max_avail=0
 Block: [c000,d000) size=4K avail=0 max_avail=0
 Block: [d000,e000) size=4K avail=0 max_avail=0
 Block: [e000,f000) size=4K avail=0 max_avail=0
 Block: [f000,0001) size=4K avail=0 max_avail=0
 Block: [0001,00011000) size=4K avail=0 max_avail=0
 Block: [00011000,00012000) size=4K avail=0 max_avail=0
 Block: [00012000,00013000) size=4K avail=0 max_avail=0
 Block: [00013000,00014000) size=4K avail=0
max_avail=137434760164K
 Block: [00014000,00015000) size=4K avail=0 max_avail=0
 Block: [00015000,00016000) size=4K avail=0 max_avail=0
 Block: [00016000,00017000) size=4K avail=0 max_avail=0
 Block: [00017000,00018000) size=4K avail=0 max_avail=0
 Block: [00018000,00019000) size=4K avail=0 max_avail=0
 Block: [00019000,0001a000) size=4K avail=0 max_avail=908K
 Block: [0001a000,0001b000) size=4K avail=0 max_avail=0
 Block: [0001b000,0001c000) size=4K avail=0 max_avail=908K
 Block: [0001c000,0001d000) size=4K avail=0 max_avail=0
 Block: [0001d000,0010) size=908K avail=908K
max_avail=908K
 Block: [00226000,00227000) size=4K avail=0 max_avail=0
 Block: [00227000,00228000) size=4K avail=0
max_avail=137434760164K
 Block: [00228000,00229000) size=4K avail=0 max_avail=0
 Block: [00229000,a000) size=2619228K avail=2619228K
max_avail=2619228K
 Block: [b000,bfeff000) size=261116K avail=261116K
max_avail=137434760164K
 Block: [bff04000,7fffbfffd000) size=137434760164K
avail=137434760164K max_avail=137434760164K
 => mem_size=140736144932864 (134216446 MB) / mem_avail=140736144809984
(134216446 MB)

:phys_alloc: Allocator 0x1e6620 dump:
 Block: [1000,2000) size=4K avail=0 max_avail=0
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: 

Re: Networking Support in VirtualBox

2017-07-25 Thread Chris Rothrock
The top entry from git log:

commit 5e3e8073467628cd2a88fc1025be9a157f976e57
Author: Christian Helmuth 
Date:   Wed May 31 16:05:53 2017 +0200

version: 17.05

This is the version pulled when I started with a fresh environment with the
command:

git clone https://github.com/genodelabs/genode

When I do a fresh pull, should this pull the latest commit, or just the
base version (17.05 in this case)?


On Tue, Jul 25, 2017 at 10:44 AM, Christian Helmuth <
christian.helm...@genode-labs.com> wrote:

> Chris,
>
> On Tue, Jul 25, 2017 at 09:46:52AM -0400, Chris Rothrock wrote:
> > The run recipe I am using is virtualbox.run.  In this recipe there is no
> > indications that the ACPI has any configurable components.  These
> > capabilities must be set in another file that this recipe is calling -
> > please tell me where these would be found so that I can adjust the caps
> on
> > acpi.
>
> The capability configuration is factored out into
> repos/base/run/platform_drv.inc. You may change the acpi_drv caps in
> line 133
>
>   
>
> Which Genode branch/version/commit hash are you using? I never
> experienced a log like yours where one and the same log contents
> appear three times pasted over each other.
>
> --
> Christian Helmuth
> Genode Labs
>
> https://www.genode-labs.com/ · https://genode.org/
> https://twitter.com/GenodeLabs · /ˈdʒiː.nəʊd/
>
> Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
> Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
>



-- 


Thank You,

Chris Rothrock
Senior System Administrator
(315) 308-1637


Re: Networking Support in VirtualBox

2017-07-25 Thread Christian Helmuth
Chris,

On Tue, Jul 25, 2017 at 09:46:52AM -0400, Chris Rothrock wrote:
> The run recipe I am using is virtualbox.run.  In this recipe there is no
> indications that the ACPI has any configurable components.  These
> capabilities must be set in another file that this recipe is calling -
> please tell me where these would be found so that I can adjust the caps on
> acpi.

The capability configuration is factored out into
repos/base/run/platform_drv.inc. You may change the acpi_drv caps in
line 133
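
(As a rough sketch of what such a declaration looks like in Genode
config - the concrete caps value below is a placeholder, not the actual
content of platform_drv.inc line 133:)

  <start name="acpi_drv" caps="400">
    ...
  </start>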

  

Which Genode branch/version/commit hash are you using? I never
experienced a log like yours where one and the same log contents
appear three times pasted over each other.

-- 
Christian Helmuth
Genode Labs

https://www.genode-labs.com/ · https://genode.org/
https://twitter.com/GenodeLabs · /ˈdʒiː.nəʊd/

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth



Re: Networking Support in VirtualBox

2017-07-25 Thread Chris Rothrock
There was no arbitrary concatenation at all - this was the entirety of
the log as it was transmitted across the serial port from the moment
the Genode boot started with the NOVA kernel.  If any concatenation
happened, it was done by the unmodified code provided with the source.

The 32-bit scenario was intended, but next time around I will also try
the 64-bit build just to compare logs.

I am unclear as to why this would even be attempting to use more than
4 GB.  Yes, the PC has more than 4 GB, but I have made no modifications
to attempt to use more than that.  Again, if this is unintentional, it
was not by my design - how can we force it to use only memory up to the
4 GB boundary?

The run recipe I am using is virtualbox.run.  In this recipe there is no
indications that the ACPI has any configurable components.  These
capabilities must be set in another file that this recipe is calling -
please tell me where these would be found so that I can adjust the caps on
acpi.



On Tue, Jul 25, 2017 at 5:55 AM, Christian Helmuth <
christian.helm...@genode-labs.com> wrote:

> Hello Chris,
>
> your log is very hard to read as it seems to arbitrarily concatenate
> several logs or multiple copies of the same log. I suggest you store
> one log file per boot of the test machine and attach the resulting
> file to your email in the future.
>
> I'll add some comments about what I read in your log in the following
> (interleaved with the original/cleaned up log text).
>
> On Mon, Jul 24, 2017 at 08:43:51PM -0400, Chris Rothrock wrote:
> > NOVA Microhypervisor v7-8bcd6fc (x86_32): Jul 21 2017 10:14:19 [gcc
> 6.3.0]
>
> You seem to run a 32-bit build of NOVA, which may not interfere with
> your scenario but may not be what you intended to do.
>
> > [ 0] TSC:3408373 kHz BUS:0 kHz
> > [ 0] CORE:0:0:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> > [ 1] CORE:0:1:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> > [ 3] CORE:0:3:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> > [ 2] CORE:0:2:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> > [ 0] disabling super pages for DMAR
> > Hypervisor features VMX
> > Hypervisor reports 4x1 CPUs
> > CPU ID (genode->kernel:package:core:thread) remapping
> >  remap (0->0:0:0:0) boot cpu
> >  remap (1->1:0:1:0)
> >  remap (2->2:0:2:0)
> >  remap (3->3:0:3:0)
> > Hypervisor info page contains 24 memory descriptors:
> > core image  [0010,029b9000)
> > binaries region [00228000,029b9000) free for reuse
> > detected physical memory: 0x - size: 0x0008ec00
> > use  physical memory: 0x - size: 0x0008e000
> > detected physical memory: 0x0010 - size: 0xb2bfb000
> > use  physical memory: 0x0010 - size: 0xb2bfb000
> > detected physical memory: 0xb3aff000 - size: 0x1000
> > use  physical memory: 0xb3aff000 - size: 0x1000
> > detected physical memory: 0x - size: 0x3f80
>
> NOVA detects some unusable RAM above 4 GiB.
>
> > :virt_alloc: Allocator 0x1f5f84 dump:
> >  Block: [2000,3000) size=4K avail=0 max_avail=0
> >  Block: [3000,4000) size=4K avail=0 max_avail=0
> >  Block: [4000,5000) size=4K avail=0 max_avail=0
> >  Block: [5000,6000) size=4K avail=0 max_avail=0
> >  Block: [6000,7000) size=4K avail=0 max_avail=0
> >  Block: [7000,8000) size=4K avail=0 max_avail=0
> >  Block: [8000,9000) size=4K avail=0 max_avail=0
> >  Block: [9000,a000) size=4K avail=0 max_avail=0
> >  Block: [a000,b000) size=4K avail=0 max_avail=0
> >  Block: [b000,c000) size=4K avail=0 max_avail=0
> >  Block: [c000,d000) size=4K avail=0 max_avail=0
> >  Block: [d000,e000) size=4K avail=0 max_avail=0
> >  Block: [e000,f000) size=4K avail=0 max_avail=0
> >  Block: [f000,0001) size=4K avail=0 max_avail=0
> >  Block: [0001,00011000) size=4K avail=0 max_avail=0
> >  Block: [00011000,00012000) size=4K avail=0 max_avail=0
> >  Block: [00012000,00013000) size=4K avail=0 max_avail=0
> >  Block: [00013000,00014000) size=4K avail=0 max_avail=2619220K
> >  Block: [00014000,00015000) size=4K avail=0 max_avail=0
> >  Block: [00015000,00016000) size=4K avail=0 max_avail=0
> >  Block: [00016000,00017000) size=4K avail=0 max_avail=0
> >  Block: [00017000,00018000) size=4K avail=0 max_avail=0
> >  Block: [00018000,00019000) size=4K avail=0 max_avail=0
> >  Block: [00019000,0001a000) size=4K avail=0 max_avail=908K
> >  Block: [0001a000,0001b000) size=4K avail=0 max_avail=0
> >  Block: [0001b000,0001c000) size=4K avail=0 max_avail=908K
> >  Block: [0001c000,0001d000) size=4K avail=0 max_avail=0
> >  Block: [0001d000,0010) size=908K avail=908K max_avail=908K
> >  Block: [00228000,00229000) size=4K avail=0 max_avail=0
> >  Block: [00229000,0022a000) size=4K avail=0 max_avail=2619220K
> >  Block: [0022a000,0022b000) size=4K avail=0 max_avail=0
> >  Block: [0022b000,a000) size=2619220K avail=2619220K
> max_avail=2619220K
> >  Block: [b000,bfeff000) 

Re: Networking Support in VirtualBox

2017-07-25 Thread Christian Helmuth
Hello Chris,

your log is very hard to read as it seems to arbitrarily concatenate
several logs or multiple copies of the same log. I suggest you store
one log file per boot of the test machine and attach the resulting
file to your email in the future.

I'll add some comments about what I read in your log in the following
(interleaved with the original/cleaned up log text).

On Mon, Jul 24, 2017 at 08:43:51PM -0400, Chris Rothrock wrote:
> NOVA Microhypervisor v7-8bcd6fc (x86_32): Jul 21 2017 10:14:19 [gcc 6.3.0]

You seem to run a 32-bit build of NOVA, which may not interfere with
your scenario but may not be what you intended to do.

> [ 0] TSC:3408373 kHz BUS:0 kHz
> [ 0] CORE:0:0:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> [ 1] CORE:0:1:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> [ 3] CORE:0:3:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> [ 2] CORE:0:2:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
> [ 0] disabling super pages for DMAR
> Hypervisor features VMX
> Hypervisor reports 4x1 CPUs
> CPU ID (genode->kernel:package:core:thread) remapping
>  remap (0->0:0:0:0) boot cpu
>  remap (1->1:0:1:0)
>  remap (2->2:0:2:0)
>  remap (3->3:0:3:0)
> Hypervisor info page contains 24 memory descriptors:
> core image  [0010,029b9000)
> binaries region [00228000,029b9000) free for reuse
> detected physical memory: 0x - size: 0x0008ec00
> use  physical memory: 0x - size: 0x0008e000
> detected physical memory: 0x0010 - size: 0xb2bfb000
> use  physical memory: 0x0010 - size: 0xb2bfb000
> detected physical memory: 0xb3aff000 - size: 0x1000
> use  physical memory: 0xb3aff000 - size: 0x1000
> detected physical memory: 0x - size: 0x3f80

NOVA detects some unusable RAM above 4 GiB.

> :virt_alloc: Allocator 0x1f5f84 dump:
>  Block: [2000,3000) size=4K avail=0 max_avail=0
>  Block: [3000,4000) size=4K avail=0 max_avail=0
>  Block: [4000,5000) size=4K avail=0 max_avail=0
>  Block: [5000,6000) size=4K avail=0 max_avail=0
>  Block: [6000,7000) size=4K avail=0 max_avail=0
>  Block: [7000,8000) size=4K avail=0 max_avail=0
>  Block: [8000,9000) size=4K avail=0 max_avail=0
>  Block: [9000,a000) size=4K avail=0 max_avail=0
>  Block: [a000,b000) size=4K avail=0 max_avail=0
>  Block: [b000,c000) size=4K avail=0 max_avail=0
>  Block: [c000,d000) size=4K avail=0 max_avail=0
>  Block: [d000,e000) size=4K avail=0 max_avail=0
>  Block: [e000,f000) size=4K avail=0 max_avail=0
>  Block: [f000,0001) size=4K avail=0 max_avail=0
>  Block: [0001,00011000) size=4K avail=0 max_avail=0
>  Block: [00011000,00012000) size=4K avail=0 max_avail=0
>  Block: [00012000,00013000) size=4K avail=0 max_avail=0
>  Block: [00013000,00014000) size=4K avail=0 max_avail=2619220K
>  Block: [00014000,00015000) size=4K avail=0 max_avail=0
>  Block: [00015000,00016000) size=4K avail=0 max_avail=0
>  Block: [00016000,00017000) size=4K avail=0 max_avail=0
>  Block: [00017000,00018000) size=4K avail=0 max_avail=0
>  Block: [00018000,00019000) size=4K avail=0 max_avail=0
>  Block: [00019000,0001a000) size=4K avail=0 max_avail=908K
>  Block: [0001a000,0001b000) size=4K avail=0 max_avail=0
>  Block: [0001b000,0001c000) size=4K avail=0 max_avail=908K
>  Block: [0001c000,0001d000) size=4K avail=0 max_avail=0
>  Block: [0001d000,0010) size=908K avail=908K max_avail=908K
>  Block: [00228000,00229000) size=4K avail=0 max_avail=0
>  Block: [00229000,0022a000) size=4K avail=0 max_avail=2619220K
>  Block: [0022a000,0022b000) size=4K avail=0 max_avail=0
>  Block: [0022b000,a000) size=2619220K avail=2619220K max_avail=2619220K
>  Block: [b000,bfeff000) size=261116K avail=261116K max_avail=2619220K
>  Block: [bff04000,bfffd000) size=996K avail=996K max_avail=996K
>  => mem_size=2951536640 (2814 MB) / mem_avail=2951413760 (2814 MB)
> 
> :phys_alloc: Allocator 0x1f4f18 dump:
>  Block: [1000,2000) size=4K avail=0 max_avail=0
>  Block: [2000,3000) size=4K avail=0 max_avail=0
>  Block: [3000,4000) size=4K avail=0 max_avail=0
>  Block: [4000,5000) size=4K avail=0 max_avail=0
>  Block: [5000,6000) size=4K avail=0 max_avail=0
>  Block: [6000,7000) size=4K avail=0 max_avail=0
>  Block: [7000,8000) size=4K avail=0 max_avail=0
>  Block: [8000,9000) size=4K avail=0 max_avail=0
>  Block: [9000,a000) size=4K avail=0 max_avail=0
>  Block: [a000,b000) size=4K avail=0 max_avail=0
>  Block: [b000,c000) size=4K avail=0 max_avail=0
>  Block: [c000,d000) size=4K avail=0 max_avail=2014484K
>  Block: [d000,e000) size=4K avail=0 max_avail=0
>  Block: [e000,f000) size=4K avail=0 max_avail=0
>  Block: [f000,0001) size=4K avail=0 max_avail=0
>  Block: [0001,00011000) size=4K avail=0 max_avail=0
>  Block: [00011000,00012000) size=4K avail=0 

Re: Networking Support in VirtualBox

2017-07-24 Thread Chris Rothrock
My mistake - the previous output is from the working build with no
networking enabled.  Below is the output with networking enabled:


NOVA Microhypervisor v7-8bcd6fc (x86_32): Jul 21 2017 10:14:19 [gcc 6.3.0]

[ 0] TSC:3408373 kHz BUS:0 kHz
[ 0] CORE:0:0:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 1] CORE:0:1:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 3] CORE:0:3:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 2] CORE:0:2:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 0] disabling super pages for DMAR
Hypervisor features VMX
Hypervisor reports 4x1 CPUs
CPU ID (genode->kernel:package:core:thread) remapping
 remap (0->0:0:0:0) boot cpu
 remap (1->1:0:1:0)
 remap (2->2:0:2:0)
 remap (3->3:0:3:0)
Hypervisor info page contains 24 memory descriptors:
core image  [0010,029b9000)
binaries region [00228000,029b9000) free for reuse
detected physical memory: 0x - size: 0x0008ec00
use  physical memory: 0x - size: 0x0008e000
detected physical memory: 0x0010 - size: 0xb2bfb000
use  physical memory: 0x0010 - size: 0xb2bfb000
detected physical memory: 0xb3aff000 - size: 0x1000
use  physical memory: 0xb3aff000 - size: 0x1000
detected physical memory: 0x - size: 0x3f80
:virt_alloc: Allocator 0x1f5f84 dump:
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: [5000,6000) size=4K avail=0 max_avail=0
 Block: [6000,7000) size=4K avail=0 max_avail=0
 Block: [7000,8000) size=4K avail=0 max_avail=0
 Block: [8000,9000) size=4K avail=0 max_avail=0
 Block: [9000,a000) size=4K avail=0 max_avail=0
 Block: [a000,b000) size=4K avail=0 max_avail=0
 Block: [b000,c000) size=4K avail=0 max_avail=0
 Block: [c000,d000) size=4K avail=0 max_avail=0
 Block: [d000,e000) size=4K avail=0 max_avail=0
 Block: [e000,f000) size=4K avail=0 max_avail=0
 Block: [f000,0001) size=4K avail=0 max_avail=0
 Block: [0001,00011000) size=4K avail=0 max_avail=0
 Block: [00011000,00012000) size=4K avail=0 max_avail=0
 Block: [00012000,00013000) size=4K avail=0 max_avail=0
 Block: [00013000,00014000) size=4K avail=0 max_avail=2619220K
 Block: [00014000,00015000) size=4K avail=0 max_avail=0
 Block: [00015000,00016000) size=4K avail=0 max_avail=0
 Block: [00016000,00017000) size=4K avail=0 max_avail=0
 Block: [00017000,00018000) size=4K avail=0 max_avail=0
 Block: [00018000,00019000) size=4K avail=0 max_avail=0
 Block: [00019000,0001a000) size=4K avail=0 max_avail=908K
 Block: [0001a000,0001b000) size=4K avail=0 max_avail=0
 Block: [0001b000,0001c000) size=4K avail=0 max_avail=908K
 Block: [0001c000,0001d000) size=4K avail=0 max_avail=0
 Block: [0001d000,0010) size=908K avail=908K max_avail=908K
 Block: [00228000,00229000) size=4K avail=0 max_avail=0
 Block: [00229000,0022a000) size=4K avail=0 max_avail=2619220K
 Block: [0022a000,0022b000) size=4K avail=0 max_avail=0
 Block: [0022b000,a000) size=2619220K avail=2619220K max_avail=2619220K
 Block: [b000,bfeff000) size=261116K avail=261116K max_avail=2619220K
 Block: [bff04000,bfffd000) size=996K avail=996K max_avail=996K
 => mem_size=2951536640 (2814 MB) / mem_avail=2951413760 (2814 MB)

:phys_alloc: Allocator 0x1f4f18 dump:
 Block: [1000,2000) size=4K avail=0 max_avail=0
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: [5000,6000) size=4K avail=0 max_avail=0
 Block: [6000,7000) size=4K avail=0 max_avail=0
 Block: [7000,8000) size=4K avail=0 max_avail=0
 Block: [8000,9000) size=4K avail=0 max_avail=0
 Block: [9000,a000) size=4K avail=0 max_avail=0
 Block: [a000,b000) size=4K avail=0 max_avail=0
 Block: [b000,c000) size=4K avail=0 max_avail=0
 Block: [c000,d000) size=4K avail=0 max_avail=2014484K
 Block: [d000,e000) size=4K avail=0 max_avail=0
 Block: [e000,f000) size=4K avail=0 max_avail=0
 Block: [f000,0001) size=4K avail=0 max_avail=0
 Block: [0001,00011000) size=4K avail=0 max_avail=0
 Block: [00011000,00012000) size=4K avail=0 max_avail=0
 Block: [00012000,00013000) size=4K avail=0 max_avail=0
 Block: [00013000,00014000) size=4K avail=0 max_avail=0
 Block: [00014000,00015000) size=4K avail=0 max_avail=0
 Block: [00015000,00016000) size=4K avail=0 max_avail=0
 Block: [00016000,00017000) size=4K avail=0 max_avail=0
 Block: [00017000,00018000) size=4K avail=0 max_avail=0
 Block: [00018000,00019000) size=4K avail=0 max_avail=2014484K
 Block: [00019000,0001a000) size=4K avail=0 max_avail=0
 Block: [0001a000,0001b000) size=4K avail=0 max_avail=452K
 Block: [0001b000,0001c000) size=4K avail=0 max_avail=0
 Block: [0001c000,0001d000) size=4K avail=0 max_avail=452K
 Block: 

Re: Networking Support in VirtualBox

2017-07-24 Thread Chris Rothrock
I have a desktop purchased for the sole purpose of Genode testing now
(finding new hardware with a serial port is almost impossible nowadays).  I
have the output of the serial port that fails to load the virtualbox VMs.
I looked through the log myself but I was unable to see a reason for the
failure.  Below is the output:

NOVA Microhypervisor v7-8bcd6fc (x86_32): Jul 21 2017 10:14:19 [gcc 6.3.0]

[ 0] TSC:3408373 kHz BUS:0 kHz
[ 0] CORE:0:0:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 1] CORE:0:1:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 3] CORE:0:3:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 2] CORE:0:2:0 6:9e:9:1 [48] Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
[ 0] disabling super pages for DMAR
Hypervisor features VMX
Hypervisor reports 4x1 CPUs
CPU ID (genode->kernel:package:core:thread) remapping
 remap (0->0:0:0:0) boot cpu
 remap (1->1:0:1:0)
 remap (2->2:0:2:0)
 remap (3->3:0:3:0)
Hypervisor info page contains 24 memory descriptors:
core image  [0010,029b9000)
binaries region [00228000,029b9000) free for reuse
detected physical memory: 0x - size: 0x0008ec00
use  physical memory: 0x - size: 0x0008e000
detected physical memory: 0x0010 - size: 0xb2bfb000
use  physical memory: 0x0010 - size: 0xb2bfb000
detected physical memory: 0xb3aff000 - size: 0x1000
use  physical memory: 0xb3aff000 - size: 0x1000
detected physical memory: 0x - size: 0x3f80
:virt_alloc: Allocator 0x1f5f84 dump:
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: [5000,6000) size=4K avail=0 max_avail=0
 Block: [6000,7000) size=4K avail=0 max_avail=0
 Block: [7000,8000) size=4K avail=0 max_avail=0
 Block: [8000,9000) size=4K avail=0 max_avail=0
 Block: [9000,a000) size=4K avail=0 max_avail=0
 Block: [a000,b000) size=4K avail=0 max_avail=0
 Block: [b000,c000) size=4K avail=0 max_avail=0
 Block: [c000,d000) size=4K avail=0 max_avail=0
 Block: [d000,e000) size=4K avail=0 max_avail=0
 Block: [e000,f000) size=4K avail=0 max_avail=0
 Block: [f000,0001) size=4K avail=0 max_avail=0
 Block: [0001,00011000) size=4K avail=0 max_avail=0
 Block: [00011000,00012000) size=4K avail=0 max_avail=0
 Block: [00012000,00013000) size=4K avail=0 max_avail=0
 Block: [00013000,00014000) size=4K avail=0 max_avail=2619220K
 Block: [00014000,00015000) size=4K avail=0 max_avail=0
 Block: [00015000,00016000) size=4K avail=0 max_avail=0
 Block: [00016000,00017000) size=4K avail=0 max_avail=0
 Block: [00017000,00018000) size=4K avail=0 max_avail=0
 Block: [00018000,00019000) size=4K avail=0 max_avail=0
 Block: [00019000,0001a000) size=4K avail=0 max_avail=908K
 Block: [0001a000,0001b000) size=4K avail=0 max_avail=0
 Block: [0001b000,0001c000) size=4K avail=0 max_avail=908K
 Block: [0001c000,0001d000) size=4K avail=0 max_avail=0
 Block: [0001d000,0010) size=908K avail=908K max_avail=908K
 Block: [00228000,00229000) size=4K avail=0 max_avail=0
 Block: [00229000,0022a000) size=4K avail=0 max_avail=2619220K
 Block: [0022a000,0022b000) size=4K avail=0 max_avail=0
 Block: [0022b000,a000) size=2619220K avail=2619220K max_avail=2619220K
 Block: [b000,bfeff000) size=261116K avail=261116K max_avail=2619220K
 Block: [bff04000,bfffd000) size=996K avail=996K max_avail=996K
 => mem_size=2951536640 (2814 MB) / mem_avail=2951413760 (2814 MB)

:phys_alloc: Allocator 0x1f4f18 dump:
 Block: [1000,2000) size=4K avail=0 max_avail=0
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: [5000,6000) size=4K avail=0 max_avail=0
 Block: [6000,7000) size=4K avail=0 max_avail=0
 Block: [7000,8000) size=4K avail=0 max_avail=0
 Block: [8000,9000) size=4K avail=0 max_avail=0
 Block: [9000,a000) size=4K avail=0 max_avail=0
 Block: [a000,b000) size=4K avail=0 max_avail=0
 Block: [b000,c000) size=4K avail=0 max_avail=0
 Block: [c000,d000) size=4K avail=0 max_avail=2014484K
 Block: [d000,e000) size=4K avail=0 max_avail=0
 Block: [e000,f000) size=4K avail=0 max_avail=0
 Block: [f000,0001) size=4K avail=0 max_avail=0
 Block: [0001,00011000) size=4K avail=0 max_avail=0
 Block: [00011000,00012000) size=4K avail=0 max_avail=0
 Block: [00012000,00013000) size=4K avail=0 max_avail=0
 Block: [00013000,00014000) size=4K avail=0 max_avail=0
 Block: [00014000,00015000) size=4K avail=0 max_avail=0
 Block: [00015000,00016000) size=4K avail=0 max_avail=0
 Block: [00016000,00017000) size=4K avail=0 max_avail=0
 Block: [00017000,00018000) size=4K avail=0 max_avail=0
 Block: [00018000,00019000) size=4K avail=0 max_avail=2014484K
 Block: [00019000,0001a000) size=4K avail=0 

Re: Networking Support in VirtualBox

2017-07-10 Thread Alexander Boettcher
On 06.07.2017 22:10, Chris Rothrock wrote:
> leaves me back at my starting point - I have no means of obtaining serial
> log data from a hardware boot.

Seriously? I can't believe it.

Getting a test machine that meets the minimal requirement of providing
serial log output is fundamental to being productive. (There are such
cheap (for a company/project) refurbished Intel notebooks available
with Intel AMT SOL on board ...)

Nevertheless,

I updated [0] and added some (experimental/untested) features. Mainly,
they capture the log output in 'core' and display it in a graphical
terminal. Obviously, this only makes sense if you manage to boot Genode
into the graphical environment and everything else works fine. (Which
sounds to be the case for you.)

Good luck,

Alex.

[0] https://github.com/alex-ab/genode/commits/staging_vbox_run

> 
> On Thu, Jul 6, 2017 at 4:00 PM, Alexander Boettcher <
> alexander.boettc...@genode-labs.com> wrote:
> 
>> Hi,
>>
>> On 06.07.2017 21:40, Chris Rothrock wrote:
>>> Scratch that, I found the issue with this specific error (I had in the
>>> virtualbox.run recipe the nic_drv and nic_bridge commented out for
>>> troubleshooting).  I have enabled these again and now have new errors
>>> listed below.  The entire serial output listed below:
>>>
>>> NOVA Microhypervisor v7-8bcd6fc (x86_64): Jun  6 2017 12:07:06 [gcc
>> 6.3.0]
>>>
>>> [ 0] TSC:2637247 kHz BUS:1017434 kHz
>>> [ 0] CORE:0:0:0 6:f:b:0 [0] Intel(R) Core(TM)2 Duo CPU T7700  @
>> 2.40GHz
>>> Hypervisor reports 1x1 CPU
>>> Warning: CPU has no invariant TSC.
>>> CPU ID (genode->kernel:package:core:thread) remapping
>>>  remap (0->0:0:0:0) boot cpu
>>
>> You are either running in a VM (looks a bit like QEMU as the VMM?
>> [on Xen?]) or your CPU is really old, not to say odd. The missing
>> invariant TSC is suspicious. Only 1 CPU is even more suspicious.
>> According to Intel [0], this CPU has 2 cores and hardware
>> virtualization support (VT-x).
>>
>> [0]
>> http://ark.intel.com/products/29762/Intel-Core2-Duo-
>> Processor-T7700-4M-Cache-2_40-GHz-800-MHz-FSB
>>
>>> [init -> vbox1] Warning: No virtualization hardware acceleration
>> available
>>> [init -> vbox2] Warning: No virtualization hardware acceleration
>> available
>>
>> Your CPU has no hardware support for virtualization. Either you are
>> not running on real hardware, or the feature is not turned on in your
>> BIOS (which is sometimes disabled by default by PC vendors).
>>
>> Cheers,
>>
>> --
>> Alexander Boettcher
>> Genode Labs
>>
>> http://www.genode-labs.com - http://www.genode.org
>>
>> Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
>> Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
>>
> 
> 
> 

-- 
Alexander Boettcher
Genode Labs

http://www.genode-labs.com - http://www.genode.org

Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth



Re: Networking Support in VirtualBox

2017-07-06 Thread Chris Rothrock
I am running through a VM (Ubuntu Linux 16.04 in VirtualBox).  I was
unable to get my laptop with a serial port to work - it looks like it
died - so to obtain serial logs I launched my build in this VM using
the QEMU module.  The hardware this VM runs on is anything but old:
it's an 8-core Intel CPU in an Alienware R4 laptop.  I have 4 cores,
8 GB of memory, and 128 MB of video RAM allocated to the VM, with all
virtualization support enabled.  I am guessing that running this build
through a VM will not work, which leaves me back at my starting point -
I have no means of obtaining serial log data from a hardware boot.

On Thu, Jul 6, 2017 at 4:00 PM, Alexander Boettcher <
alexander.boettc...@genode-labs.com> wrote:

> Hi,
>
> On 06.07.2017 21:40, Chris Rothrock wrote:
> > Scratch that, I found the issue with this specific error (I had in the
> > virtualbox.run recipe the nic_drv and nic_bridge commented out for
> > troubleshooting).  I have enabled these again and now have new errors
> > listed below.  The entire serial output listed below:
> >
> > NOVA Microhypervisor v7-8bcd6fc (x86_64): Jun  6 2017 12:07:06 [gcc
> 6.3.0]
> >
> > [ 0] TSC:2637247 kHz BUS:1017434 kHz
> > [ 0] CORE:0:0:0 6:f:b:0 [0] Intel(R) Core(TM)2 Duo CPU T7700  @
> 2.40GHz
> > Hypervisor reports 1x1 CPU
> > Warning: CPU has no invariant TSC.
> > CPU ID (genode->kernel:package:core:thread) remapping
> >  remap (0->0:0:0:0) boot cpu
>
> You are either running in a VM (looks a bit like QEMU as the VMM?
> [on Xen?]) or your CPU is really old, not to say odd. The missing
> invariant TSC is suspicious. Only 1 CPU is even more suspicious.
> According to Intel [0], this CPU has 2 cores and hardware
> virtualization support (VT-x).
>
> [0]
> http://ark.intel.com/products/29762/Intel-Core2-Duo-
> Processor-T7700-4M-Cache-2_40-GHz-800-MHz-FSB
>
> > [init -> vbox1] Warning: No virtualization hardware acceleration
> available
> > [init -> vbox2] Warning: No virtualization hardware acceleration
> available
>
> Your CPU has no hardware support for virtualization. Either you are
> not running on real hardware, or the feature is not turned on in your
> BIOS (which is sometimes disabled by default by PC vendors).
>
> Cheers,
>
> --
> Alexander Boettcher
> Genode Labs
>
> http://www.genode-labs.com - http://www.genode.org
>
> Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
> Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
>



-- 


Thank You,

Chris Rothrock
Senior System Administrator
(315) 308-1637


Re: Networking Support in VirtualBox

2017-07-06 Thread Alexander Boettcher
Hi,

On 06.07.2017 21:40, Chris Rothrock wrote:
> Scratch that, I found the issue with this specific error (I had in the
> virtualbox.run recipe the nic_drv and nic_bridge commented out for
> troubleshooting).  I have enabled these again and now have new errors
> listed below.  The entire serial output listed below:
> 
> NOVA Microhypervisor v7-8bcd6fc (x86_64): Jun  6 2017 12:07:06 [gcc 6.3.0]
> 
> [ 0] TSC:2637247 kHz BUS:1017434 kHz
> [ 0] CORE:0:0:0 6:f:b:0 [0] Intel(R) Core(TM)2 Duo CPU T7700  @ 2.40GHz
> Hypervisor reports 1x1 CPU
> Warning: CPU has no invariant TSC.
> CPU ID (genode->kernel:package:core:thread) remapping
>  remap (0->0:0:0:0) boot cpu

You are either running in a VM (looks a bit like QEMU as the VMM?
[on Xen?]) or your CPU is really old, not to say odd. The missing
invariant TSC is suspicious. Only 1 CPU is even more suspicious.
According to Intel [0], this CPU has 2 cores and hardware
virtualization support (VT-x).

[0]
http://ark.intel.com/products/29762/Intel-Core2-Duo-Processor-T7700-4M-Cache-2_40-GHz-800-MHz-FSB

> [init -> vbox1] Warning: No virtualization hardware acceleration available
> [init -> vbox2] Warning: No virtualization hardware acceleration available

Your CPU has no hardware support for virtualization. Either you are
not running on real hardware, or the feature is not turned on in your
BIOS (which is sometimes disabled by default by PC vendors).

Cheers,

-- 
Alexander Boettcher
Genode Labs

http://www.genode-labs.com - http://www.genode.org

Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth



Re: Networking Support in VirtualBox

2017-07-06 Thread Chris Rothrock
Yes, I found that I had those commented out for troubleshooting.  I resent
the new error log with assertion failures.

On Thu, Jul 6, 2017 at 3:32 PM, Alexander Boettcher <
alexander.boettc...@genode-labs.com> wrote:

> Hello,
>
> On 06.07.2017 20:50, Chris Rothrock wrote:
> > I have a serial output available now to help isolate the issue.  Below is
> > the entire output but it seems that the issue is nic_drv and nic_bridge
> (as
> > well as log_terminal) are being denied the ROM session necessary.  Any
> > thoughts as to why?
>
> In the list of available ROM modules, the nic_bridge, the nic_drv and
> the log_terminal are missing:
>
> > :rom_fs: ROM modules:
> >  ROM: [7fe07000,7fe1a7e0) acpi_drv
> >  ROM: [7db6c000,7db6de6a) config
> >  ROM: [7fb0f000,7fdfccf8) core.o
> >  ROM: [7f8fb000,7f977cd8) device_pd
> >  ROM: [7ff9d000,7ffdedd0) fb_drv
> >  ROM: [00019000,0001a000) hypervisor_info_page
> >  ROM: [7e89,7e8cc6d0) init
> >  ROM: [7db6e000,7dc19bf8) ld.lib.so
> >  ROM: [7d9db000,7db0c050) libc.lib.so
> >  ROM: [7db4b000,7db53130) libc_pipe.lib.so
> >  ROM: [7fa01000,7fa0fd70) libc_terminal.lib.so
> >  ROM: [7fa1,7faf4d40) libiconv.lib.so
> >  ROM: [7f9d9000,7fa004a0) libm.lib.so
> >  ROM: [7db54000,7db6b9c8) nit_fb
> >  ROM: [7db0d000,7db4a2e0) nitpicker
> >  ROM: [7f99e000,7f9d8790) platform_drv
> >  ROM: [7f8cd000,7f8e5cf8) ps2_drv
> >  ROM: [7fdfd000,7fe066f0) pthread.lib.so
> >  ROM: [7f978000,7f99de18) qemu-usb.lib.so
> >  ROM: [7e878000,7e88f770) report_rom
> >  ROM: [7fe1b000,7fe2ba08) rtc_drv
> >  ROM: [7fe2d000,7ff9cae0) stdcxx.lib.so
> >  ROM: [7e8cd000,7f8cd000) test.iso
> >  ROM: [7fe2c000,7fe2cfb1) test.vbox
> >  ROM: [7f8e6000,7f8fa6a8) timer
> >  ROM: [7faf5000,7fb0e0a0) vbox_pointer
> >  ROM: [7dc1a000,7e877678) virtualbox5-nova
>
> Please check that these files are part of the boot_modules list in
> your run script and actually got added.
>
> Cheers,
>
> --
> Alexander Boettcher
> Genode Labs
>
> http://www.genode-labs.com - http://www.genode.org
>
> Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
> Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
>



-- 


Thank You,

Chris Rothrock
Senior System Administrator
(315) 308-1637


Re: Networking Support in VirtualBox

2017-07-06 Thread Chris Rothrock
Scratch that - I found the issue with this specific error (I had the
nic_drv and nic_bridge commented out in the virtualbox.run recipe for
troubleshooting).  I have enabled these again and now get the new
errors listed below.  The entire serial output follows:




NOVA Microhypervisor v7-8bcd6fc (x86_64): Jun  6 2017 12:07:06 [gcc 6.3.0]

[ 0] TSC:2637247 kHz BUS:1017434 kHz
[ 0] CORE:0:0:0 6:f:b:0 [0] Intel(R) Core(TM)2 Duo CPU T7700  @ 2.40GHz
Hypervisor reports 1x1 CPU
Warning: CPU has no invariant TSC.
CPU ID (genode->kernel:package:core:thread) remapping
 remap (0->0:0:0:0) boot cpu
Hypervisor info page contains 8 memory descriptors:
core image  [0010,028a)
binaries region [00225000,028a) free for reuse
detected physical memory: 0x - size: 0x0009fc00
use  physical memory: 0x - size: 0x0009f000
detected physical memory: 0x0010 - size: 0x7fee
use  physical memory: 0x0010 - size: 0x7fee
:virt_alloc: Allocator 0x1e66f0 dump:
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: [5000,6000) size=4K avail=0 max_avail=0
 Block: [6000,7000) size=4K avail=0 max_avail=0
 Block: [7000,8000) size=4K avail=0 max_avail=0
 Block: [8000,9000) size=4K avail=0 max_avail=0
 Block: [9000,a000) size=4K avail=0 max_avail=0
 Block: [a000,b000) size=4K avail=0 max_avail=0
 Block: [b000,c000) size=4K avail=0 max_avail=0
 Block: [c000,d000) size=4K avail=0 max_avail=0
 Block: [d000,e000) size=4K avail=0 max_avail=0
 Block: [e000,f000) size=4K avail=0 max_avail=0
 Block: [f000,0001) size=4K avail=0 max_avail=0
 Block: [0001,00011000) size=4K avail=0 max_avail=0
 Block: [00011000,00012000) size=4K avail=0 max_avail=0
 Block: [00012000,00013000) size=4K avail=0 max_avail=0
 Block: [00013000,00014000) size=4K avail=0
max_avail=137434760164K
 Block: [00014000,00015000) size=4K avail=0 max_avail=0
 Block: [00015000,00016000) size=4K avail=0 max_avail=0
 Block: [00016000,00017000) size=4K avail=0 max_avail=0
 Block: [00017000,00018000) size=4K avail=0 max_avail=0
 Block: [00018000,00019000) size=4K avail=0 max_avail=0
 Block: [00019000,0001a000) size=4K avail=0 max_avail=908K
 Block: [0001a000,0001b000) size=4K avail=0 max_avail=0
 Block: [0001b000,0001c000) size=4K avail=0 max_avail=908K
 Block: [0001c000,0001d000) size=4K avail=0 max_avail=0
 Block: [0001d000,0010) size=908K avail=908K
max_avail=908K
 Block: [00225000,00226000) size=4K avail=0 max_avail=0
 Block: [00226000,00227000) size=4K avail=0
max_avail=137434760164K
 Block: [00227000,00228000) size=4K avail=0 max_avail=0
 Block: [00228000,a000) size=2619232K avail=2619232K
max_avail=2619232K
 Block: [b000,bfeff000) size=261116K avail=261116K
max_avail=137434760164K
 Block: [bff04000,7fffbfffd000) size=137434760164K
avail=137434760164K max_avail=137434760164K
 => mem_size=140736144936960 (134216446 MB) / mem_avail=140736144814080
(134216446 MB)

:phys_alloc: Allocator 0x1e5620 dump:
 Block: [1000,2000) size=4K avail=0 max_avail=0
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: [5000,6000) size=4K avail=0 max_avail=0
 Block: [6000,7000) size=4K avail=0 max_avail=0
 Block: [7000,8000) size=4K avail=0 max_avail=0
 Block: [8000,9000) size=4K avail=0 max_avail=0
 Block: [9000,a000) size=4K avail=0 max_avail=0
 Block: [a000,b000) size=4K avail=0 max_avail=0
 Block: [b000,c000) size=4K avail=0 max_avail=0
 Block: [c000,d000) size=4K avail=0
max_avail=2015480K
 Block: [d000,e000) size=4K avail=0 max_avail=0
 Block: [e000,f000) size=4K avail=0 max_avail=0
 Block: [f000,0001) size=4K avail=0 max_avail=0
 Block: [0001,00011000) size=4K avail=0 max_avail=0
 Block: [00011000,00012000) size=4K 

Re: Networking Support in VirtualBox

2017-07-06 Thread Alexander Boettcher
Hello,

On 06.07.2017 20:50, Chris Rothrock wrote:
> I have a serial output available now to help isolate the issue.  Below is
> the entire output but it seems that the issue is nic_drv and nic_bridge (as
> well as log_terminal) are being denied the ROM session necessary.  Any
> thoughts as to why?

In the list of available ROM modules, the nic_bridge, the nic_drv and
the log_terminal are missing:

> :rom_fs: ROM modules:
>  ROM: [7fe07000,7fe1a7e0) acpi_drv
>  ROM: [7db6c000,7db6de6a) config
>  ROM: [7fb0f000,7fdfccf8) core.o
>  ROM: [7f8fb000,7f977cd8) device_pd
>  ROM: [7ff9d000,7ffdedd0) fb_drv
>  ROM: [00019000,0001a000) hypervisor_info_page
>  ROM: [7e89,7e8cc6d0) init
>  ROM: [7db6e000,7dc19bf8) ld.lib.so
>  ROM: [7d9db000,7db0c050) libc.lib.so
>  ROM: [7db4b000,7db53130) libc_pipe.lib.so
>  ROM: [7fa01000,7fa0fd70) libc_terminal.lib.so
>  ROM: [7fa1,7faf4d40) libiconv.lib.so
>  ROM: [7f9d9000,7fa004a0) libm.lib.so
>  ROM: [7db54000,7db6b9c8) nit_fb
>  ROM: [7db0d000,7db4a2e0) nitpicker
>  ROM: [7f99e000,7f9d8790) platform_drv
>  ROM: [7f8cd000,7f8e5cf8) ps2_drv
>  ROM: [7fdfd000,7fe066f0) pthread.lib.so
>  ROM: [7f978000,7f99de18) qemu-usb.lib.so
>  ROM: [7e878000,7e88f770) report_rom
>  ROM: [7fe1b000,7fe2ba08) rtc_drv
>  ROM: [7fe2d000,7ff9cae0) stdcxx.lib.so
>  ROM: [7e8cd000,7f8cd000) test.iso
>  ROM: [7fe2c000,7fe2cfb1) test.vbox
>  ROM: [7f8e6000,7f8fa6a8) timer
>  ROM: [7faf5000,7fb0e0a0) vbox_pointer
>  ROM: [7dc1a000,7e877678) virtualbox5-nova

Please check that these files are part of the boot_modules list in
your run script and actually got added.

Cheers,

-- 
Alexander Boettcher
Genode Labs

http://www.genode-labs.com - http://www.genode.org

Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth



Re: Networking Support in VirtualBox

2017-07-06 Thread Chris Rothrock
I have serial output available now to help isolate the issue.  Below
is the entire output, but it seems that the issue is that nic_drv and
nic_bridge (as well as log_terminal) are being denied the ROM sessions
they need.  Any thoughts as to why?


warning: TCG doesn't support requested feature: CPUID.01H:EDX.vme [bit 1]
Bender: Hello World.
Need 0275 bytes to relocate modules.
Relocating to 7d89:
Copying 41066688 bytes...
Copying 147640 bytes...


NOVA Microhypervisor v7-8bcd6fc (x86_64): Jun  6 2017 12:07:06 [gcc 6.3.0]

[ 0] TSC:2643542 kHz BUS:1019894 kHz
[ 0] CORE:0:0:0 6:f:b:0 [0] Intel(R) Core(TM)2 Duo CPU T7700  @ 2.40GHz
Hypervisor reports 1x1 CPU
Warning: CPU has no invariant TSC.
CPU ID (genode->kernel:package:core:thread) remapping
 remap (0->0:0:0:0) boot cpu
Hypervisor info page contains 8 memory descriptors:
core image  [0010,02829000)
binaries region [00225000,02829000) free for reuse
detected physical memory: 0x - size: 0x0009fc00
use  physical memory: 0x - size: 0x0009f000
detected physical memory: 0x0010 - size: 0x7fee
use  physical memory: 0x0010 - size: 0x7fee
:virt_alloc: Allocator 0x1e65f0 dump:
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: [5000,6000) size=4K avail=0 max_avail=0
 Block: [6000,7000) size=4K avail=0 max_avail=0
 Block: [7000,8000) size=4K avail=0 max_avail=0
 Block: [8000,9000) size=4K avail=0 max_avail=0
 Block: [9000,a000) size=4K avail=0 max_avail=0
 Block: [a000,b000) size=4K avail=0 max_avail=0
 Block: [b000,c000) size=4K avail=0 max_avail=0
 Block: [c000,d000) size=4K avail=0 max_avail=0
 Block: [d000,e000) size=4K avail=0 max_avail=0
 Block: [e000,f000) size=4K avail=0 max_avail=0
 Block: [f000,0001) size=4K avail=0 max_avail=0
 Block: [0001,00011000) size=4K avail=0 max_avail=0
 Block: [00011000,00012000) size=4K avail=0 max_avail=0
 Block: [00012000,00013000) size=4K avail=0 max_avail=0
 Block: [00013000,00014000) size=4K avail=0
max_avail=137434760164K
 Block: [00014000,00015000) size=4K avail=0 max_avail=0
 Block: [00015000,00016000) size=4K avail=0 max_avail=0
 Block: [00016000,00017000) size=4K avail=0 max_avail=920K
 Block: [00017000,00018000) size=4K avail=0 max_avail=0
 Block: [00018000,00019000) size=4K avail=0 max_avail=920K
 Block: [00019000,0001a000) size=4K avail=0 max_avail=0
 Block: [0001a000,0010) size=920K avail=920K
max_avail=920K
 Block: [00225000,00226000) size=4K avail=0 max_avail=0
 Block: [00226000,00227000) size=4K avail=0
max_avail=137434760164K
 Block: [00227000,00228000) size=4K avail=0 max_avail=0
 Block: [00228000,a000) size=2619232K avail=2619232K
max_avail=2619232K
 Block: [b000,bfeff000) size=261116K avail=261116K
max_avail=137434760164K
 Block: [bff04000,7fffbfffd000) size=137434760164K
avail=137434760164K max_avail=137434760164K
 => mem_size=140736144936960 (134216446 MB) / mem_avail=140736144826368
(134216446 MB)

:phys_alloc: Allocator 0x1e5520 dump:
 Block: [1000,2000) size=4K avail=0 max_avail=0
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: [5000,6000) size=4K avail=0 max_avail=0
 Block: [6000,7000) size=4K avail=0 max_avail=0
 Block: [7000,8000) size=4K avail=0 max_avail=0
 Block: [8000,9000) size=4K avail=0 max_avail=0
 Block: [9000,a000) size=4K avail=0 max_avail=0
 Block: [a000,b000) size=4K avail=0 max_avail=0
 Block: [b000,c000) size=4K avail=0 max_avail=0
 Block: [c000,d000) size=4K avail=0
max_avail=2015956K
 Block: [d000,e000) size=4K avail=0 max_avail=0
 Block: [e000,f000) size=4K avail=0 max_avail=0
 Block: [f000,0001) size=4K avail=0 max_avail=0
 Block: [0001,00011000) size=4K avail=0 max_avail=0
 Block: [00011000,00012000) size=4K avail=0 max_avail=0
 Block: 

Re: Networking Support in VirtualBox

2017-06-23 Thread Nobody III
Would it work to get logging output over LAN using terminal_log and
tcp_terminal?
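
(Such a setup would roughly chain a TCP-capable terminal server with
the LOG-to-Terminal adapter. A minimal sketch of the init config -
policy attributes and RAM quanta here are from memory and may differ
between Genode versions; tcp_terminal additionally needs a Nic session
routed to a network driver:)

  <start name="tcp_terminal">
    <resource name="RAM" quantum="4M"/>
    <provides> <service name="Terminal"/> </provides>
    <config>
      <!-- expose sessions labeled from terminal_log on TCP port 8888 -->
      <policy label_prefix="terminal_log" port="8888"/>
    </config>
  </start>

  <start name="terminal_log">
    <resource name="RAM" quantum="1M"/>
    <provides> <service name="LOG"/> </provides>
    <route>
      <service name="Terminal"> <child name="tcp_terminal"/> </service>
      <any-service> <parent/> <any-child/> </any-service>
    </route>
  </start>

Components whose output should go over the LAN would then route their
LOG sessions to terminal_log instead of core.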

On Fri, Jun 23, 2017 at 3:54 AM, Alexander Boettcher <
alexander.boettc...@genode-labs.com> wrote:

> Hello,
>
> On 22.06.2017 20:29, Chris Rothrock wrote:
> > Here is what I tried to fix this:
> > I have increased the caps on the nic_bridge to 200
> > increased the caps on the vbox1 and vbox2 to 500
> > I removed the nic bridge from the config for one vbox in the
> virtualbox.run
> > (to see if I can get video on even one)
>
> your serial log output would tell you what goes wrong, and you could
> fix it (with high probability) in less than 5 minutes.
>
> If you have an Intel vPro machine, you may use the Intel AMT SOL
> (Serial-over-LAN) feature to capture the log (if you manage to
> configure it correctly).
>
> In principle, without serial output (PCI serial card, Mini PCI
> card/PCMCIA for notebooks, built-in UART/serial device ...) this kind
> of trial-and-error play is useless.
>
> > Nothing I changed made any difference.  As long as enabled="true" was
> set,
> > neither VM loaded at all.  This is booting from physical hardware, not
> in a
> > virtualized environment.  Thoughts?
>
> Is the network device in your native machine supported by our network
> driver? lwip.run can be used for a simple test first.
>
> I attached my serial log output of virtualbox.run with network and 2 VMs
> from a oldish Lenovo X201 Thinkpad (using amtterm to get Intel AMT SOL
> output.)
>
> Regards,
>
> >> [0] https://github.com/alex-ab/genode/commits/staging_vbox_run
>
> --
> Alexander Boettcher
> Genode Labs
>
> http://www.genode-labs.com - http://www.genode.org
>
> Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
> Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
>


Re: Networking Support in VirtualBox

2017-06-23 Thread Alexander Boettcher
Hello,

On 22.06.2017 20:29, Chris Rothrock wrote:
> Here is what I tried to fix this:
> I have increased the caps on the nic_bridge to 200
> increased the caps on the vbox1 and vbox2 to 500
> I removed the nic bridge from the config for one vbox in the virtualbox.run
> (to see if I can get video on even one)

your serial log output would tell you what goes wrong, and you could
fix it (with high probability) in less than 5 minutes.

If you have an Intel vPro machine, you may use the Intel AMT SOL
(Serial-over-LAN) feature to capture the log (if you manage to
configure it correctly).

In principle, without serial output (PCI serial card, Mini PCI
card/PCMCIA for notebooks, built-in UART/serial device ...) this kind
of trial-and-error play is useless.

> Nothing I changed made any difference.  As long as enabled="true" was set,
> neither VM loaded at all.  This is booting from physical hardware, not in a
> virtualized environment.  Thoughts?

Is the network device in your native machine supported by our network
driver? lwip.run can be used for a simple test first.

I attached my serial log output of virtualbox.run with network and 2
VMs from an oldish Lenovo X201 Thinkpad (using amtterm to get the Intel
AMT SOL output).

Regards,

>> [0] https://github.com/alex-ab/genode/commits/staging_vbox_run

-- 
Alexander Boettcher
Genode Labs

http://www.genode-labs.com - http://www.genode.org

Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
make[1]: Leaving directory '/home/user/genode.staging/build/x86_64'
genode build completed
using 'core-nova.o' as 'core.o'
using 'ld-nova.lib.so' as 'ld.lib.so'
using 'nova_timer_drv' as 'timer'
using 'ld-nova.lib.so' as 'ld.lib.so'
spawn amttool x201-amt.test.labs reset
host x201.test.labs, reset [y/N] ? y
execute: reset
result: pt_status: success
 Warning: could not check AMT SOL redirection service because of missing wsman 
tool, --amt-tool==amttool
spawn /bin/sh -c amtterm -u admin -v x201-amt.test.labs
amtterm: NONE -> CONNECT (connection to host)
ipv4 x201-amt.test.labs [10.0.0.232] 16994 open
amtterm: CONNECT -> INIT (redirection initialization)
amtterm: INIT -> AUTH (session authentication)
amtterm: AUTH -> INIT_SOL (serial-over-lan initialization)
amtterm: INIT_SOL -> RUN_SOL (serial-over-lan active)
serial-over-lan redirection ok
connected now, use ^] to escape
Bender: Hello World.
Need 0797e000 bytes to relocate modules.
Relocating to 78682000:
Copying 127240640 bytes...
Copying 149000 bytes...

NOVA Microhypervisor v7-2006635 (x86_64): Jun 23 2017 11:36:18 [gcc 6.3.0] [MBI]

[ 0] TSC:2399940 kHz BUS:10 kHz
[ 0] CORE:0:0:0 6:25:5:4 [3] Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
[ 1] CORE:0:0:1 6:25:5:4 [3] Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
[ 2] CORE:0:2:0 6:25:5:4 [3] Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
[ 3] CORE:0:2:1 6:25:5:4 [3] Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
[ 0] DMAR:0x81036078 FRR:0 FR:0x5 BDF:0:2:0 FI:0xff7fff000
Hypervisor features VMX
Hypervisor reports 4x1 CPUs
CPU ID (genode->kernel:package:core:thread) remapping
 remap (0->0:0:0:0) boot cpu
 remap (1->2:0:2:0)
 remap (2->1:0:0:1)
 remap (3->3:0:2:1)
Hypervisor info page contains 41 memory descriptors:
core image  [0010,07a58000)
binaries region [00226000,07a58000) free for reuse
detected physical memory: 0x - size: 0x00089400
use  physical memory: 0x - size: 0x00089000
detected physical memory: 0x0010 - size: 0xbb17c000
use  physical memory: 0x0010 - size: 0xbb17c000
detected physical memory: 0xbb282000 - size: 0x000dd000
use  physical memory: 0xbb282000 - size: 0x000dd000
detected physical memory: 0xbb40f000 - size: 0x0006
use  physical memory: 0xbb40f000 - size: 0x0006
detected physical memory: 0xbb70f000 - size: 0x8000
use  physical memory: 0xbb70f000 - size: 0x8000
detected physical memory: 0xbb71f000 - size: 0x0004c000
use  physical memory: 0xbb71f000 - size: 0x0004c000
detected physical memory: 0xbb7ff000 - size: 0x1000
use  physical memory: 0xbb7ff000 - size: 0x1000
detected physical memory: 0x0001 - size: 0x3800
use  physical memory: 0x0001 - size: 0x3800
:virt_alloc: Allocator 0x1e76f0 dump:
 Block: [2000,3000) size=4K avail=0 max_avail=0
 Block: [3000,4000) size=4K avail=0 max_avail=0
 Block: [4000,5000) size=4K avail=0 max_avail=0
 Block: 

Re: Networking Support in VirtualBox

2017-06-22 Thread Chris Rothrock
I am using all of your latest commit changes and I can get everything
to work except when the test.vbox file has Adapter 0 enabled="true".
If it is set to false there is obviously no networking, but the two VMs
boot into their respective windows and I can interact with each
independently.  When I enable the adapter in the test.vbox file, Genode
still boots, the GUI starts, and the two windows appear, but they
remain stubbornly blank.
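
For orientation, the adapter section of a .vbox machine file looks
roughly like this (a sketch from memory - attribute values are
illustrative and the child node describing the attachment depends on
what the Genode port expects):

  <Network>
    <Adapter slot="0" enabled="true" cable="true" type="82540EM">
      <!-- attachment node (e.g. a NAT- or bridged-style element) -->
    </Adapter>
  </Network>

The rest of the thread suggests that, on Genode, the actual attachment
and multiplexing is handled outside the .vbox file by nic_bridge in the
run script.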

Here is what I tried to fix this:
I have increased the caps on the nic_bridge to 200
increased the caps on the vbox1 and vbox2 to 500
I removed the nic bridge from the config for one vbox in the virtualbox.run
(to see if I can get video on even one)

Nothing I changed made any difference.  As long as enabled="true" was set,
neither VM loaded at all.  This is booting from physical hardware, not in a
virtualized environment.  Thoughts?

On Mon, Jun 19, 2017 at 3:13 AM, Alexander Boettcher <
alexander.boettc...@genode-labs.com> wrote:

> Hello,
>
> On 16.06.2017 20:20, Chris Rothrock wrote:
> > I'm working with the VirtualBox run recipe where I have set use_net 1,
> > added the repositories for dde_linux and dde_ipxe for the NIC driver.
> The
> > build runs successfully and I can boot to the 2 virtual machines (I have
> > one as TinyCore and one as DSL - Damn Small Linux) and interact with the
> > OS's but there is no network adapter detected.  To resolve this, I set
> the
> > Adapter 0 to enabled in the test.vbox file but when I do this, I no
> longer
> > get any video within the frame for the virtual machine (not even the
> > bootloader for that frame).  I have tried increasing the cap for the
> > virtualbox module in the virtualbox.run recipe but still have nothing.  I
> > need to be able to test network communication between the two virtual
> > machines (and to the wider network on which they are attached) but this
> is
> > preventing me from this process.  Any help in this would be greatly
> > appreciated.
>
> please try the commit named "virtualbox.run: support network for
> multiple VMs" of [0].
>
> Mainly, the configuration used in test.vbox is wrong for Genode and in
> the virtualbox.run script a Network multiplexer is missing, to run 2 VMs
> with network.
>
> [0] https://github.com/alex-ab/genode/commits/staging_vbox_run
>
> Hope it helps,
>
> --
> Alexander Boettcher
> Genode Labs
>
> http://www.genode-labs.com - http://www.genode.org
>
> Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
> Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
>



-- 


Thank You,

Chris Rothrock
Senior System Administrator
(315) 308-1637


Re: Networking Support in VirtualBox

2017-06-19 Thread Alexander Boettcher
Hello,

On 16.06.2017 20:20, Chris Rothrock wrote:
> I'm working with the VirtualBox run recipe where I have set use_net 1,
> added the repositories for dde_linux and dde_ipxe for the NIC driver.  The
> build runs successfully and I can boot to the 2 virtual machines (I have
> one as TinyCore and one as DSL - Damn Small Linux) and interact with the
> OS's but there is no network adapter detected.  To resolve this, I set the
> Adapter 0 to enabled in the test.vbox file but when I do this, I no longer
> get any video within the frame for the virtual machine (not even the
> bootloader for that frame).  I have tried increasing the cap for the
> virtualbox module in the virtualbox.run recipe but still have nothing.  I
> need to be able to test network communication between the two virtual
> machines (and to the wider network on which they are attached) but this is
> preventing me from this process.  Any help in this would be greatly
> appreciated.

please try the commit named "virtualbox.run: support network for
multiple VMs" of [0].

Mainly, the configuration used in test.vbox is wrong for Genode and in
the virtualbox.run script a Network multiplexer is missing, to run 2 VMs
with network.

[0] https://github.com/alex-ab/genode/commits/staging_vbox_run
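
(To illustrate the multiplexer idea: both VM instances request a Nic
session and are routed to a single nic_bridge, which in turn sits on
top of the one nic_drv. A minimal sketch - the caps/RAM values are
illustrative:)

  <start name="vbox1" caps="500">
    <binary name="virtualbox5-nova"/>
    <resource name="RAM" quantum="1G"/>
    <route>
      <service name="Nic"> <child name="nic_bridge"/> </service>
      <any-service> <parent/> <any-child/> </any-service>
    </route>
  </start>
  <!-- vbox2 analogous, also routing its Nic session to nic_bridge -->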

Hope it helps,

-- 
Alexander Boettcher
Genode Labs

http://www.genode-labs.com - http://www.genode.org

Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth



Networking Support in VirtualBox

2017-06-16 Thread Chris Rothrock
I'm working with the VirtualBox run recipe, where I have set use_net
to 1 and added the dde_linux and dde_ipxe repositories for the NIC
driver.  The build runs successfully and I can boot the 2 virtual
machines (one running TinyCore and one running DSL - Damn Small Linux)
and interact with the OSes, but no network adapter is detected.  To
resolve this, I set Adapter 0 to enabled in the test.vbox file, but
when I do this I no longer get any video within the frame for the
virtual machine (not even the bootloader for that frame).  I have tried
increasing the caps for the virtualbox module in the virtualbox.run
recipe but still get nothing.  I need to be able to test network
communication between the two virtual machines (and to the wider
network to which they are attached), but this issue is preventing me
from doing so.  Any help would be greatly appreciated.


-- 


Thank You,

Chris Rothrock
Senior System Administrator
(315) 308-1637