After rebooting, the local acloud instance is working! Thank you, Kevin, for your help.

run_cvd E 03-31 13:33:23  7602  7602 users.cpp:48] Group virtaccess does 
not exist
run_cvd I 03-31 13:33:23  7602  7602 main.cc:367] The following files 
contain useful debugging information:
run_cvd I 03-31 13:33:23  7602  7602 main.cc:369]   Launcher log: 
/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/launcher.log
run_cvd I 03-31 13:33:23  7602  7602 main.cc:371]   Android's logcat 
output: /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/logcat
run_cvd I 03-31 13:33:23  7602  7602 main.cc:372]   Kernel log: 
/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/kernel.log
run_cvd I 03-31 13:33:23  7602  7602 main.cc:373]   Instance configuration: 
/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/cuttlefish_config.json
run_cvd I 03-31 13:33:23  7602  7602 main.cc:374]   Instance environment: 
/tmp/acloud_cvd_temp/local-instance-1/.cuttlefish.sh
run_cvd I 03-31 13:33:23  7602  7602 main.cc:375] To access the console 
run: socat file:$(tty),raw,echo=0 
/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/console
run_cvd I 03-31 13:33:55  7602  7602 main.cc:191] Virtual device booted 
successfully
run_cvd I 03-31 13:33:55  7602  7602 main.cc:198] 
VIRTUAL_DEVICE_BOOT_COMPLETED
launch_cvd I 03-31 13:33:55  7552  7552 launch_cvd.cc:211] run_cvd exited 
successfully.
OK! (33s)
Remote terminal can't support VNC. Skipping VNC startup.
Total time:  (33s)


Device summary:
 - device serial: 127.0.0.1:6520 (local-instance-1[127.0.0.1:6520])
   export ANDROID_SERIAL=127.0.0.1:6520
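
(A quick note for anyone landing on this thread: with the serial shown above, a
minimal way to talk to the instance is over adb, assuming adb is already on your
PATH from the lunch environment. This is just a sketch of ordinary adb usage:)

$ export ANDROID_SERIAL=127.0.0.1:6520
$ adb wait-for-device                        # blocks until the virtual device is visible
$ adb shell getprop ro.build.fingerprint     # confirm which build is running

Setting ANDROID_SERIAL simply avoids passing -s 127.0.0.1:6520 to every adb call.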

On Tuesday, March 31, 2020 at 3:51:39 AM UTC+11, Kevin Cheng wrote:
>
> We've seen the permission errors go away after a reboot. Have you rebooted 
> your host after running `acloud setup --host`?
>
> On Mon, Mar 30, 2020 at 7:58 AM Dean Wheatley <[email protected]> wrote:
>
>> Hi Kevin, the specific "acloud setup --host" is fixed.
>>
>> However, "acloud create --local-image --local-instance" is failing:
>>
>> $ acloud create --local-image --local-instance
>> ...
>>
>> launch_cvd I 03-30 15:12:18 2529578 2529578 launch_cvd.cc:184] 
>> assemble_cvd exited successfully.
>> launch_cvd I 03-30 15:12:18 2529578 2529578 subprocess.cpp:286] Started 
>> (pid: 2529630): /home/$USER/master/out/host/linux-x86/bin/run_cvd
>> launch_cvd I 03-30 15:12:18 2529578 2529578 subprocess.cpp:289] 
>> --undefok=report_anonymous_usage_stats
>> run_cvd I 03-30 15:12:18 2529630 2529630 subprocess.cpp:286] Started 
>> (pid: 2529637): /bin/bash
>> run_cvd I 03-30 15:12:18 2529630 2529630 subprocess.cpp:289] -c
>> run_cvd I 03-30 15:12:18 2529630 2529630 subprocess.cpp:289] egrep -h -e 
>> "^iff:.*" /proc/*/fdinfo/*
>> run_cvd E 03-30 15:12:18 2529630 2529630 users.cpp:48] Group virtaccess 
>> does not exist
>> run_cvd I 03-30 15:12:18 2529630 2529630 main.cc:367] The following files 
>> contain useful debugging information:
>> run_cvd I 03-30 15:12:18 2529630 2529630 main.cc:369]   Launcher log: 
>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/launcher.log
>> run_cvd I 03-30 15:12:18 2529630 2529630 main.cc:371]   Android's logcat 
>> output: /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/logcat
>> run_cvd I 03-30 15:12:18 2529630 2529630 main.cc:372]   Kernel log: 
>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/kernel.log
>> run_cvd I 03-30 15:12:18 2529630 2529630 main.cc:373]   Instance 
>> configuration: 
>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/cuttlefish_config.json
>> run_cvd I 03-30 15:12:18 2529630 2529630 main.cc:374]   Instance 
>> environment: /tmp/acloud_cvd_temp/local-instance-1/.cuttlefish.sh
>> run_cvd I 03-30 15:12:18 2529630 2529630 main.cc:375] To access the 
>> console run: socat file:$(tty),raw,echo=0 
>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/console
>>
>> I have attached cuttlefish.zip containing the "useful debugging 
>> information" files.
>>
>> Would you be able to look into what the issue is?
>>
>> launcher.log mentions:
>>
>> [ERROR:src/main.rs:1281] The architecture failed to build the vm: error 
>> creating devices: failed to set up virtual socket device: failed to open 
>> vhost device: failed to open vhost device: Permission denied (os error 13)
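>>
>> (Hedged aside for anyone hitting the same error: the sketch below is one way
>> to see what crosvm is being denied. The exact device nodes and group names
>> depend on the cuttlefish-common version, so treat the names as assumptions.)
>>
>> $ ls -l /dev/vhost-vsock /dev/vhost-net      # which group owns the vhost nodes?
>> $ id -nG                                     # groups of the current login session
>> $ getent group virtaccess cvdnetwork kvm     # which of these groups exist at all?
>>
>> If the owning group exists but the current session predates being added to
>> it, a re-login or reboot is needed before crosvm can open the device.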
>>
>> Thank you.
>>
>> On Friday, March 20, 2020 at 10:25:36 AM UTC+11, Kevin Cheng wrote:
>>
>>> Hey Dean, I wanted to check in to see if you were still having issues 
>>> after the fix was merged.
>>>
>>> On Thursday, March 12, 2020 at 12:50:35 PM UTC-7, Kevin Cheng wrote:
>>>>
>>>> Thanks for reporting that, Dean; there's a pull request to fix it as you 
>>>> recommended (https://github.com/google/android-cuttlefish/pull/45).
>>>>
>>>> On Wed, Mar 11, 2020 at 10:22 PM Dean Wheatley <[email protected]> wrote:
>>>>
>>>>> Thanks Kevin.
>>>>>
>>>>> "acloud setup --host" failed, but provided reason:
>>>>>
>>>>> ...
>>>>>
>>>>> Start to install cuttlefish-common :
>>>>> git clone https://github.com/google/android-cuttlefish.git 
>>>>> /tmp/tmpT5OIau/cf-common
>>>>> cd /tmp/tmpT5OIau/cf-common
>>>>> yes | sudo mk-build-deps -i -r -B
>>>>> dpkg-buildpackage -uc -us
>>>>> sudo apt-get install -y -f ../cuttlefish-common_*_amd64.deb
>>>>> Press 'y' to continue or anything else to do it myself and run acloud 
>>>>> again[y/N]: y
>>>>> Run command: git clone 
>>>>> https://github.com/google/android-cuttlefish.git 
>>>>> /tmp/tmpT5OIau/cf-common
>>>>> cd /tmp/tmpT5OIau/cf-common
>>>>> yes | sudo mk-build-deps -i -r -B
>>>>> dpkg-buildpackage -uc -us
>>>>> sudo apt-get install -y -f ../cuttlefish-common_*_amd64.deb
>>>>> Cloning into '/tmp/tmpT5OIau/cf-common'...
>>>>> remote: Enumerating objects: 37, done.
>>>>> remote: Counting objects: 100% (37/37), done.
>>>>> remote: Compressing objects: 100% (29/29), done.
>>>>> remote: Total 2361 (delta 16), reused 19 (delta 8), pack-reused 2324
>>>>> Receiving objects: 100% (2361/2361), 623.77 KiB | 780.00 KiB/s, done.
>>>>> Resolving deltas: 100% (1338/1338), done.
>>>>> [sudo] password for dwhea: 
>>>>> mk-build-deps: warning:     debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> Starting pkgProblemResolver with broken count: 0
>>>>> Starting 2 pkgProblemResolver with broken count: 0
>>>>> Done
>>>>> yes: standard output: Broken pipe
>>>>> dpkg-buildpackage: warning:     debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>>  dpkg-source --before-build .
>>>>> dpkg-source: warning: cf-common/debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>>  fakeroot debian/rules clean
>>>>>  dpkg-source -b .
>>>>> dpkg-source: warning: cf-common/debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>>  debian/rules build
>>>>>  fakeroot debian/rules binary
>>>>> dpkg-parsechangelog: warning:     debian/changelog(l13): badly 
>>>>> formatted trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> dh_installchangelogs: warning:     debian/changelog(l13): badly 
>>>>> formatted trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> dh_installchangelogs: warning:     debian/changelog(l13): badly 
>>>>> formatted trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> dpkg-gencontrol: warning:     debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> dpkg-gencontrol: warning:     debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> dpkg-gencontrol: warning: package cuttlefish-integration: substitution 
>>>>> variable ${shlibs:Depends} unused, but is defined
>>>>> dpkg-gencontrol: warning:     debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> dpkg-gencontrol: warning: package cuttlefish-integration: substitution 
>>>>> variable ${shlibs:Depends} unused, but is defined
>>>>>  dpkg-genbuildinfo
>>>>> dpkg-genbuildinfo: warning:     debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> dpkg-genbuildinfo: warning:     debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>>  dpkg-genchanges  >../cuttlefish-common_0.9.13_amd64.changes
>>>>> dpkg-genchanges: warning:     debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> dpkg-genchanges: warning:     debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> dpkg-genchanges: info: including full source code in upload
>>>>>  dpkg-source --after-build .
>>>>> dpkg-source: warning: cf-common/debian/changelog(l13): badly formatted 
>>>>> trailer line
>>>>> LINE:  -- Alistair Delva <[email protected]> Tue, 03 Mar 2020 
>>>>> 03:18:25 -0800
>>>>> E: Unable to correct problems, you have held broken packages.
>>>>> Traceback (most recent call last):
>>>>>   File "/tmp/Soong.python_OdiCog/acloud/public/acloud_main.py", line 
>>>>> 431, in <module>
>>>>>     EXIT_CODE, EXCEPTION_STACKTRACE = main(sys.argv[1:])
>>>>>   File "/tmp/Soong.python_OdiCog/acloud/public/acloud_main.py", line 
>>>>> 410, in main
>>>>>     setup.Run(args)
>>>>>   File "/tmp/Soong.python_OdiCog/acloud/setup/setup.py", line 68, in 
>>>>> Run
>>>>>     subtask.Run(force_setup=args.force)
>>>>>   File "/tmp/Soong.python_OdiCog/acloud/setup/base_task_runner.py", 
>>>>> line 99, in Run
>>>>>     self._Run()
>>>>>   File "/tmp/Soong.python_OdiCog/acloud/setup/host_setup_runner.py", 
>>>>> line 153, in _Run
>>>>>     setup_common.CheckCmdOutput(cmd, shell=True)
>>>>>   File "/tmp/Soong.python_OdiCog/acloud/setup/setup_common.py", line 
>>>>> 53, in CheckCmdOutput
>>>>>     return subprocess.check_output(cmd, **kwargs)
>>>>>   File "/usr/lib/python2.7/subprocess.py", line 223, in check_output
>>>>>     raise CalledProcessError(retcode, cmd, output=output)
>>>>> subprocess.CalledProcessError: Command 'git clone 
>>>>> https://github.com/google/android-cuttlefish.git 
>>>>> /tmp/tmpT5OIau/cf-common
>>>>> cd /tmp/tmpT5OIau/cf-common
>>>>> yes | sudo mk-build-deps -i -r -B
>>>>> dpkg-buildpackage -uc -us
>>>>> sudo apt-get install -y -f ../cuttlefish-common_*_amd64.deb' returned 
>>>>> non-zero exit status 100
>>>>>
>>>>>
>>>>> The last command is failing on my distribution (Ubuntu 20.04):
>>>>>
>>>>> $ sudo apt-get install -y -f ../cuttlefish-common_*_amd64.deb
>>>>> Reading package lists... Done
>>>>> Building dependency tree       
>>>>> Reading state information... Done
>>>>> Note, selecting 'cuttlefish-common' instead of 
>>>>> '../cuttlefish-common_0.9.13_amd64.deb'
>>>>> Some packages could not be installed. This may mean that you have
>>>>> requested an impossible situation or if you are using the unstable
>>>>> distribution that some required packages have not yet been created
>>>>> or been moved out of Incoming.
>>>>> The following information may help to resolve the situation:
>>>>>
>>>>> The following packages have unmet dependencies:
>>>>>  cuttlefish-common : Depends: python but it is not installable
>>>>> E: Unable to correct problems, you have held broken packages.
>>>>>
>>>>>
>>>>> This is likely related to Ubuntu 20.04 dropping the "python" package 
>>>>> (which provided Python 2). See 
>>>>> https://github.com/RadeonOpenCompute/ROC-smi/issues/83 for a similar issue.
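>>>>>
>>>>> (Hedged check, assuming a stock Ubuntu 20.04 apt setup: the first command
>>>>> should show no installable candidate for "python", and the second shows
>>>>> what the built .deb actually declares as its dependencies.)
>>>>>
>>>>> $ apt-cache policy python python3
>>>>> $ dpkg --info ../cuttlefish-common_0.9.13_amd64.deb | grep -i depends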
>>>>>
>>>>> Perhaps changing the cuttlefish-common deb package dependency from 
>>>>> python to a version-specific Python dependency (python3 if supported, 
>>>>> python2 otherwise), along with the cuttlefish-common script shebangs, 
>>>>> would fix this (see the sketch below).
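>>>>>
>>>>> (Purely as a hedged illustration of that kind of change, not the actual
>>>>> packaging fix: before building, one could point the dependency at python3.
>>>>> This is only valid if the packaged scripts actually run under Python 3,
>>>>> and the edit should be checked by hand.)
>>>>>
>>>>> $ cd /tmp/tmpT5OIau/cf-common
>>>>> $ grep -n 'python' debian/control                  # where the dependency is declared
>>>>> $ sed -i 's/\bpython\b/python3/g' debian/control   # hypothetical edit; review the diff
>>>>> $ dpkg-buildpackage -uc -us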
>>>>>
>>>>> Thanks,
>>>>>
>>>>>
>>>>> On Wednesday, March 11, 2020 at 6:40:27 PM UTC+11, Kevin Cheng wrote:
>>>>>>
>>>>>> Ah, could you try running `acloud setup --host`?
>>>>>>
>>>>>> That should install some required packages (I was expecting acloud to 
>>>>>> automatically kick off setup the first time you run `acloud create ...`, 
>>>>>> but maybe we have a bug there).
>>>>>>
>>>>>> And after you run `acloud setup --host`, you will need to reboot your 
>>>>>> host (I'm sorry) for some changes to take effect.
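>>>>>>
>>>>>> (Hedged aside for readers: after the reboot, a quick sanity check is that
>>>>>> the setup's groups and kernel modules are in place. The group name below
>>>>>> is an assumption and varies with the cuttlefish-common version.)
>>>>>>
>>>>>> $ groups | grep -o 'cvdnetwork'   # hypothetical group added by acloud setup
>>>>>> $ lsmod | grep vhost              # vhost modules expected for crosvm's vsock/net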
>>>>>>
>>>>>> If that still doesn't work, could you provide your system's details? 
>>>>>> (OS and version). Thanks for bearing with me on this and sorry this has 
>>>>>> been such a hassle.
>>>>>>
>>>>>>
>>>>>> Kevin
>>>>>>
>>>>>> On Tue, Mar 10, 2020 at 4:33 PM Dean Wheatley <[email protected]> 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Kevin, rebooting didn't help.
>>>>>>>
>>>>>>> After reboot, I went to AOSP master repo, and ran:
>>>>>>>
>>>>>>> $ source build/envsetup.sh
>>>>>>> $ lunch aosp_cf_x86_phone-userdebug
>>>>>>> $ acloud create --local-image --local-instance
>>>>>>>
>>>>>>> When you say "running setup" needs to be done before creating the 
>>>>>>> local instance, what exactly do you mean by "running setup"? The only 
>>>>>>> steps I've done are source -> lunch -> make, then acloud create. Are 
>>>>>>> there more steps that need to be done (e.g. related to the virtaccess 
>>>>>>> group)?
>>>>>>>
>>>>>>> Dean
>>>>>>>
>>>>>>> On Tuesday, March 10, 2020 at 5:37:03 PM UTC+11, Kevin Cheng wrote:
>>>>>>>>
>>>>>>>> Thanks for reporting this Dean, we're in the process of removing 
>>>>>>>> the conflicting message to avoid future confusion.
>>>>>>>>
>>>>>>>> As for the error that you're running into, could you try rebooting 
>>>>>>>> your host to see if that helps? We've recently updated the setup flow 
>>>>>>>> and have found that running setup and then creating a local instance 
>>>>>>>> without a reboot hits a similar issue to the one you've encountered, 
>>>>>>>> and a host reboot seems to resolve it.
>>>>>>>>
>>>>>>>> Kevin
>>>>>>>>
>>>>>>>> On Wednesday, March 4, 2020 at 3:13:31 PM UTC-8 Dean Wheatley wrote:
>>>>>>>>
>>>>>>>>> Following 
>>>>>>>>> https://source.android.com/setup/start#create_acloud_instance 
>>>>>>>>> fails
>>>>>>>>>
>>>>>>>>> $ acloud create --local-image --local-instance
>>>>>>>>>
>>>>>>>>> ==================
>>>>>>>>> Notice:
>>>>>>>>>   We collect anonymous usage statistics in accordance with our 
>>>>>>>>> Content Licenses (https://source.android.com/setup/start/licenses), 
>>>>>>>>> Contributor License Agreement (
>>>>>>>>> https://opensource.google.com/docs/cla/), Privacy Policy (
>>>>>>>>> https://policies.google.com/privacy) and Terms of Service (
>>>>>>>>> https://policies.google.com/terms).
>>>>>>>>> ==================
>>>>>>>>>
>>>>>>>>> Creating local AVD instance with the following details:
>>>>>>>>> Image (local):
>>>>>>>>>   /home/$USER/master/out/target/product/vsoc_x86
>>>>>>>>> hw config:
>>>>>>>>>   cpu - 2
>>>>>>>>>   ram - 4GB
>>>>>>>>>   display - 720x1280 (320 DPI)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> (Disclaimer: Local cuttlefish instance is not a fully supported
>>>>>>>>> runtime configuration, fixing breakages is on a best effort SLO.)
>>>>>>>>>
>>>>>>>>> Waiting for AVD(s) to boot up ...launch_cvd I 03-05 09:44:32 
>>>>>>>>> 1114258 1114258 subprocess.cpp:286] Started (pid: 1114281): /home/
>>>>>>>>> $USER/master/out/host/linux-x86/bin/assemble_cvd
>>>>>>>>> launch_cvd I 03-05 09:44:32 1114258 1114258 subprocess.cpp:289] 
>>>>>>>>> --memory_mb=4096
>>>>>>>>> launch_cvd I 03-05 09:44:32 1114258 1114258 subprocess.cpp:289] 
>>>>>>>>> --run_adb_connector=true
>>>>>>>>> launch_cvd I 03-05 09:44:32 1114258 1114258 subprocess.cpp:289] 
>>>>>>>>> --system_image_dir=/home/$USER/master/out/target/product/vsoc_x86
>>>>>>>>> launch_cvd I 03-05 09:44:32 1114258 1114258 subprocess.cpp:289] 
>>>>>>>>> --cpus=2
>>>>>>>>> launch_cvd I 03-05 09:44:32 1114258 1114258 subprocess.cpp:289] 
>>>>>>>>> --daemon=true
>>>>>>>>> launch_cvd I 03-05 09:44:32 1114258 1114258 subprocess.cpp:289] 
>>>>>>>>> --dpi=320
>>>>>>>>> launch_cvd I 03-05 09:44:32 1114258 1114258 subprocess.cpp:289] 
>>>>>>>>> --instance_dir=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime
>>>>>>>>> launch_cvd I 03-05 09:44:32 1114258 1114258 subprocess.cpp:289] 
>>>>>>>>> --x_res=720
>>>>>>>>> launch_cvd I 03-05 09:44:32 1114258 1114258 subprocess.cpp:289] 
>>>>>>>>> --y_res=1280
>>>>>>>>> assemble_cvd E 03-05 09:44:33 1114281 1114281 
>>>>>>>>> fetcher_config.cpp:212] Could not find file ending in kernel
>>>>>>>>> assemble_cvd E 03-05 09:44:33 1114281 1114281 
>>>>>>>>> fetcher_config.cpp:212] Could not find file ending in initramfs.img
>>>>>>>>> assemble_cvd W 03-05 09:44:33 1114281 1114281 flags.cc:737] 
>>>>>>>>> Requested resuming a previous session (the default behavior) but the 
>>>>>>>>> base 
>>>>>>>>> images have changed under the overlay, making the overlay 
>>>>>>>>> incompatible. 
>>>>>>>>> Wiping the overlay files.
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 flags.cc:547] 
>>>>>>>>> Assuming prior files of 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_assembly 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/.cuttlefish.sh 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/.cuttlefish_config.json 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime/* 
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 flags.cc:755] 
>>>>>>>>> Setting up /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_assembly
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 flags.cc:765] 
>>>>>>>>> Setting up 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_assembly/disk_hole/disk_hole
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 flags.cc:776] 
>>>>>>>>> Setting up /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.N
>>>>>>>>> assemble_cvd E 03-05 09:44:33 1114281 1114281 
>>>>>>>>> fetcher_config.cpp:212] Could not find file ending in kernel
>>>>>>>>> assemble_cvd E 03-05 09:44:33 1114281 1114281 
>>>>>>>>> fetcher_config.cpp:212] Could not find file ending in initramfs.img
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 data_image.cc:144] 
>>>>>>>>> misc partition image: use existing
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 data_image.cc:134] 
>>>>>>>>> /home/$USER/master/out/target/product/vsoc_x86/userdata.img 
>>>>>>>>> exists. Not creating it.
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 data_image.cc:67] 
>>>>>>>>> Creating 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/access-kregistry
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:286] 
>>>>>>>>> Started (pid: 1114288): /bin/dd
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> if=/dev/zero
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> of=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/access-kregistry
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> bs=64K
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> count=1
>>>>>>>>> 1+0 records in
>>>>>>>>> 1+0 records out
>>>>>>>>> 65536 bytes (66 kB, 64 KiB) copied, 0.000400425 s, 164 MB/s
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:51] Examining super
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:74] was not sparse
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:76] size was 6442450944
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:51] Examining userdata
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:74] was not sparse
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:76] size was 4563402752
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:51] Examining cache
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:74] was not sparse
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:76] size was 67108864
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:51] Examining metadata
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:74] was not sparse
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:76] size was 16777216
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:51] Examining boot
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:74] was not sparse
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:76] size was 21811200
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:51] Examining misc
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:74] was not sparse
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 
>>>>>>>>> image_aggregator.cc:76] size was 1048576
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:286] 
>>>>>>>>> Started (pid: 1114289): /home/$USER
>>>>>>>>> /master/out/host/linux-x86/bin/cf_bpttool
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> make_table
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> --input=/dev/stdin
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> --output_json=/dev/stdout
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:286] 
>>>>>>>>> Started (pid: 1114291): /home/$USER
>>>>>>>>> /master/out/host/linux-x86/bin/cf_bpttool
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> make_table
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> --input=/dev/stdin
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> --output_gpt=/dev/stdout
>>>>>>>>> assemble_cvd W 03-05 09:44:33 1114281 1114281 flags.cc:908] 
>>>>>>>>> Requested to continue an existing session, but the overlay was newer 
>>>>>>>>> than 
>>>>>>>>> its underlying composite disk. Wiping the overlay.
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:286] 
>>>>>>>>> Started (pid: 1114297): /home/$USER
>>>>>>>>> /master/out/host/linux-x86/bin/crosvm
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> create_qcow2
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> --backing_file=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_assembly/composite.img
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/overlay.img
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 data_image.cc:67] 
>>>>>>>>> Creating 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/access-kregistry
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:286] 
>>>>>>>>> Started (pid: 1114301): /bin/dd
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> if=/dev/zero
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> of=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/access-kregistry
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> bs=64K
>>>>>>>>> assemble_cvd I 03-05 09:44:33 1114281 1114281 subprocess.cpp:289] 
>>>>>>>>> count=1
>>>>>>>>> 1+0 records in
>>>>>>>>> 1+0 records out
>>>>>>>>> 65536 bytes (66 kB, 64 KiB) copied, 0.000446169 s, 147 MB/s
>>>>>>>>> launch_cvd I 03-05 09:44:33 1114258 1114258 launch_cvd.cc:162] 
>>>>>>>>> assemble_cvd exited successfully.
>>>>>>>>> launch_cvd I 03-05 09:44:33 1114258 1114258 subprocess.cpp:286] 
>>>>>>>>> Started (pid: 1114302): /home//master/out/host/linux-x86/bin/run_cvd
>>>>>>>>> run_cvd I 03-05 09:44:33 1114302 1114302 subprocess.cpp:286] 
>>>>>>>>> Started (pid: 1114309): /bin/bash
>>>>>>>>> run_cvd I 03-05 09:44:33 1114302 1114302 subprocess.cpp:289] -c
>>>>>>>>> run_cvd I 03-05 09:44:33 1114302 1114302 subprocess.cpp:289] egrep 
>>>>>>>>> -h -e "^iff:.*" /proc/*/fdinfo/*
>>>>>>>>> run_cvd E 03-05 09:44:33 1114302 1114302 users.cpp:48] Group 
>>>>>>>>> virtaccess does not exist
>>>>>>>>> run_cvd I 03-05 09:44:33 1114302 1114302 main.cc:367] The 
>>>>>>>>> following files contain useful debugging information:
>>>>>>>>> run_cvd I 03-05 09:44:33 1114302 1114302 main.cc:369]   Launcher 
>>>>>>>>> log: 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/launcher.log
>>>>>>>>> run_cvd I 03-05 09:44:33 1114302 1114302 main.cc:371]   Android's 
>>>>>>>>> logcat output: 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/logcat
>>>>>>>>> run_cvd I 03-05 09:44:33 1114302 1114302 main.cc:372]   Kernel 
>>>>>>>>> log: 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/kernel.log
>>>>>>>>> run_cvd I 03-05 09:44:33 1114302 1114302 main.cc:373]   Instance 
>>>>>>>>> configuration: 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/cuttlefish_config.json
>>>>>>>>> run_cvd I 03-05 09:44:33 1114302 1114302 main.cc:374]   Instance 
>>>>>>>>> environment: /tmp/acloud_cvd_temp/local-instance-1/.cuttlefish.sh
>>>>>>>>> run_cvd I 03-05 09:44:33 1114302 1114302 main.cc:375] To access 
>>>>>>>>> the console run: socat file:$(tty),raw,echo=0 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/console
>>>>>>>>>
>>>>>>>>> The error mentions the group "virtaccess" does not exist.
>>>>>>>>>
>>>>>>>>> The 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime/launcher.log 
>>>>>>>>> has 
>>>>>>>>> the following errors:
>>>>>>>>>
>>>>>>>>> [ERROR:src/main.rs:1248] The architecture failed to build the vm: 
>>>>>>>>> error creating devices: failed to set up virtual socket device: 
>>>>>>>>> failed to 
>>>>>>>>> open vhost device: failed to open vhost device: Permission denied (os 
>>>>>>>>> error 
>>>>>>>>> 13)>
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 process_monitor.cc:139] 
>>>>>>>>> Detected exit of monitored subprocess
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 process_monitor.cc:147] 
>>>>>>>>> Subprocess /home/$USER/master/out/host/linux-x86/bin/crosvm (1114415) 
>>>>>>>>> has 
>>>>>>>>> exited with exit code 1
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:286] 
>>>>>>>>> Started (pid: 1114421): 
>>>>>>>>> /home/$USER/master/out/host/linux-x86/bin/crosvm
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] run
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --initrd=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_assembly/ramdisk.img.concat
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --null-audio
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --mem=4096
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --cpus=2
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --params= printk.devkmsg=on firmware_class.path=/vendor/etc/ 
>>>>>>>>> init=/init 
>>>>>>>>> androidboot.hardware=cutf_cvm security=selinux 
>>>>>>>>> androidboot.console=tty>
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --rwdisk=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/overlay.img
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --socket=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/internal/crosvm_control.sock
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --single-touch=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/internal/touch.sock:720:1280
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --keyboard=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/internal/keyboard.sock
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --tap-fd=36
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --tap-fd=38
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --rw-pmem-device=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/access-kregistry
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --disable-sandbox
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --cid=3
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --serial=num=1,type=file,path=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/internal/kernel-log-pipe,console=true
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> --serial=num=2,type=file,path=/tmp/acloud_cvd_temp/local-instance-1/cuttlefish_runtime.1/internal/console-pipe,stdin=true
>>>>>>>>> run_cvd I 03-05 09:44:34 1114312 1114313 subprocess.cpp:289] 
>>>>>>>>> /tmp/acloud_cvd_temp/local-instance-1/cuttlefish_assembly/kernel
>>>>>>>>> vnc_server I 03-05 09:44:34 1114318 1114319 
>>>>>>>>> virtual_inputs.cpp:333] connected to touch
>>>>>>>>> vnc_server I 03-05 09:44:34 1114318 1114319 
>>>>>>>>> virtual_inputs.cpp:337] connected to keyboard
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> There are conflicting messages about whether this is supported: on 
>>>>>>>>> the one hand, 
>>>>>>>>> https://source.android.com/setup/start#create_acloud_instance 
>>>>>>>>> suggests it is. On the other hand, the output says:
>>>>>>>>>
>>>>>>>>> (Disclaimer: Local cuttlefish instance is not a fully supported
>>>>>>>>> runtime configuration, fixing breakages is on a best effort SLO.)
>>>>>>>>>
>>>>>>>>> In any case, I hope you can help regarding this issue.
>>>>>>>>>
>>>>>>>>> Thanks a lot.
>>>>>>>>>