Hi Andrew,

I successfully installed a 3.19 kernel (details at http://dachary.org/?p=3594). It turns out the loop driver is compiled into that kernel and allows zero partitions by default. Since I was looking for a way to make /dev/loop usable for the tests, I rebooted with /boot/grub/grub.conf set to
[ubuntu@vpm083 src]$ cat /boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title rhel-6.5-cloudinit (3.19.0-ceph-00029-gaf5b96e)
root (hd0,0)
kernel /boot/vmlinuz-3.19.0-ceph-00029-gaf5b96e ro root=LABEL=79d3d2d4 loop.max_part=16
initrd /boot/initramfs-3.19.0-ceph-00029-gaf5b96e.img
and that works fine. Do you know if I could do that from the yaml file
directly? Alternatively, I could use a kernel that does not have the loop
driver compiled in and modprobe it with max_part=16, but it's unclear to me
which kernels are available and what their names are.
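
For reference, assuming I find such a kernel, the modprobe variant should just
be (the loop. prefix only applies on the kernel command line):

# only possible when loop is built as a module instead of compiled in
modprobe loop max_part=16
# check that the parameter took effect
cat /sys/module/loop/parameters/max_part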
Thanks a lot for the help :-)
On 09/03/2015 14:37, Andrew Schoen wrote:
> Loic,
>
> After locking the node like normal, you can use teuthology to install the
> kernel you need. Just include the kernel stanza in your yaml file.
> http://ceph.com/teuthology/docs/teuthology.task.html#teuthology.task.kernel.task
>
> Something like this:
>
> interactive-on-error: true
>
>
> roles:
> - [mon.0, client.0]
> kernel:
> branch: testing
> tasks:
> - interactive:
>
> Use teuthology-lock --list-targets to get the connection information for your
> newly locked node and add that to your yaml.
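>
> The targets stanza it prints should look something like this (hostname and
> key abbreviated here, yours will differ):
>
> targets:
>   ubuntu@vpm083.example.com: ssh-rsa AAAA...
>
> Once that is in your yaml you can run it with ./virtualenv/bin/teuthology
> your-config.yaml.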
>
> Best,
> Andrew
>
> On Mar 8, 2015, at 7:50 AM, Loic Dachary <[email protected]> wrote:
>
>> Hi Andrew,
>>
>> After successfully locking a centos 6.5 VPS in the community lab with
>>
>> teuthology-lock --lock-many 1 --owner [email protected] --machine-type vps
>> --os-type centos --os-version 6.5
>>
>> it turns out that it comes with a 2.6.32 kernel by default. A more recent
>> kernel is required to run the ceph-disk tests because they rely on /dev/loop
>> handling partition tables as a regular disk would. After installing a 3.10
>> kernel from http://elrepo.org/tiki/kernel-lt and rebooting, it was no longer
>> possible to reach the machine.
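>>
>> For context, what the tests need boils down to partitions appearing on a
>> loop device, along these lines (device and file names are just an example):
>>
>> dd if=/dev/zero of=disk.img bs=1M count=64
>> losetup /dev/loop0 disk.img
>> parted -s /dev/loop0 mklabel gpt mkpart primary 1M 32M
>> partprobe /dev/loop0
>> ls /dev/loop0p1   # only exists when the kernel allows loop partitions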
>>
>> The teuthology-suite command has a -k option, which suggests there is a way
>> to specify the kernel when provisioning a machine. The command
>>
>> ./virtualenv/bin/teuthology-suite --dry-run -k testing --priority 101
>> --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira
>> --distro ubuntu --email [email protected] --owner [email protected]
>> --ceph firefly-backports
>>
>> shows lines like:
>>
>> 2015-03-08 13:43:26,432.432 INFO:teuthology.suite:dry-run:
>> ./virtualenv/bin/teuthology-schedule --name
>> loic-2015-03-08_13:43:06-rgw-firefly-backports-testing-basic-multi --num 1
>> --worker multi --priority 101 --owner [email protected] --description
>> 'rgw/multifs/{clusters/fixed-2.yaml
>> fs/btrfs.yaml rgw_pool_type/erasure-coded.yaml
>> tasks/rgw_multipart_upload.yaml}' -- /tmp/schedule_suite_AQ2b6w
>> /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/clusters/fixed-2.yaml
>> /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/fs/btrfs.yaml
>> /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/rgw_pool_type/erasure-coded.yaml
>> /home/loic/src/ceph-qa-suite_firefly/suites/rgw/multifs/tasks/rgw_multipart_upload.yaml
>>
>> which shows the word "testing" as part of the job name. The
>> https://github.com/ceph/teuthology/ page has some more information about
>> kernel choices, but it's non-trivial to figure out how to translate that
>> into something that could be used in the context of teuthology-lock.
>>
>> I'm not sure where to look and I would be grateful if you could give me a
>> pointer in the right direction.
>>
>> Cheers
>>
>> --
>> Loïc Dachary, Artisan Logiciel Libre
>>
>
--
Loïc Dachary, Artisan Logiciel Libre
