Please see below.

I did a new install with xz, just as in your guide.

The card is made by SanDisk (16G SDHC Card)

[root@localhost ioan]# fdisk -l /dev/sdb

Disk /dev/sdb: 15.9 GB, 15931539456 bytes, 31116288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00081fe2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1         1050624     3999743     1474560   bf  Solaris
/dev/sdb4            2048     1050623      524288   83  Linux

Partition table entries are not in disk order
[root@localhost ioan]#


 
> On Nov 15, 2015, at 9:16 PM, Gordan Bobic <[email protected]> wrote:
> 
> On 15/11/15 09:57, ioan stan wrote:
> 
>> Also, since the dreamplug is not that powerful, would it be possible to
>> build the  image based on the minimal server? If somebody wants to add
>> any GUI capability, this could be done later from repository.
> 
> The images are all minimal with no GUI already.
> 
>> I tried to install RSEL on my new and faster SD card. I couldn't
>> complete the install. zfs-fuse is running but I cannot import the
>> zpool.
> 
> Did you initialize the new SD card from the image, or did you dd
> the content of the old card onto the new card?
> 
> What is the make/model of your new card, and where did you buy it?
> 
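(For reference, initializing the card from the xz image rather than dd-ing the old card looks roughly like this; the filenames below are stand-ins, not the guide's actual image name. The sketch writes to a plain file so it can be tried safely.)

```shell
# Demonstrated against plain files so it can be tried safely; for the
# real card, replace "card.img" with the device node (e.g. /dev/sdb)
# after double-checking which device is the SD card.
set -e
printf 'fake image contents' > source.img   # stand-in for the guide's image
xz -kf source.img                           # produces source.img.xz
xzcat source.img.xz | dd of=card.img bs=4M conv=fsync 2>/dev/null
cmp source.img card.img && echo "write verified"
```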
>> pre-mount:/etc# ps -auxw | grep zfs
>> root 164  0.0  0.5  18656  2616 ? Ssl  00:00   0:00 @bin/zfs-fuse
>> root 232  0.0  0.1   2568   688 ttyS0  S+   00:06   0:00 grep zfs
>> 
>> pre-mount:/etc# zpool import
>> pre-mount:/etc# zpool status
>> no pools available
> 
> Are you using the internal uSD card (/dev/sda) or external full size SD card 
> (/dev/sdb)?
> 
> What do you get from (assuming external full size card):
> # fdisk -l /dev/sdb
> 
> Gordan
> 
>>> On Nov 8, 2015, at 8:04 AM, Gordan Bobic <[email protected]> wrote:
>>> 
>>> For those who share my appreciation of zfs, I will be releasing
>>> a patch to zfs-fuse-dracut package that facilitates better
>>> functioning when it is used for zfs-root.
>>> 
>>> Problem:
>>> zfs-fuse relies on /proc and /dev for importing new pools.
>>> Unfortunately, when it runs from the initramfs, systemd tears down
>>> all of its contents - the only file handles that remain
>>> available to zfs-fuse are the ones it already has open at the
>>> point the tear-down happens.
>>> 
>>> Workaround:
>>> Don't use initramfs. The problem is that this means we need a
>>> different pre-root environment for zfs-fuse to run in. What we
>>> can use is squashfs running from a raw partition. That does mean
>>> we need an additional partition on the disk, but that is hardly
>>> the end of the world.
>>> 
>>> We simply unpack the generated initramfs and re-make it as a
>>> squashfs.
>>> 
>>> So what we do is we pass to the kernel:
>>> root=/dev/wherever_we_put_squashfs_partition
>>> 
>>> That gets us booting the pre-root environment, but then we have
>>> to do something for the pre-root environment to mount the real
>>> rootfs. This is where the new zfs-fuse-dracut patch comes in.
>>> It now understands a kernel boot parameter zfsroot=pool/fs, which
>>> it will use in preference to root= to determine what FS to use
>>> as the rootfs.
>>> 
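(Putting the two parameters together, the kernel command line for this scheme ends up looking something like the line below; the device node is illustrative - use whichever partition actually holds the squashfs, and your own pool/fs name.)

```
root=/dev/sdb4 ro zfsroot=pool/fs
```

The pre-root squashfs mounts from root=, and the patched zfs-fuse-dracut scripts then use zfsroot= in preference to root= to mount the real rootfs.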
>>> Effect:
>>> Because the rootfs is read-only squashfs it doesn't get torn
>>> down, and zfs-fuse remains fully functional, able to import
>>> new pools (e.g. from removable media).
>>> 
>>> Side effect:
>>> The init fs ends up mounted on /mnt.
>>> 
>>> I will put together a more detailed howto on the conversion of
>>> initramfs to squashfs for those that need it once I have
>>> released the zfs-fuse-dracut patch.
>>> 
>>> Gordan
>>> _______________________________________________
>>> users mailing list
>>> [email protected] <mailto:[email protected]> 
>>> <mailto:[email protected] <mailto:[email protected]>>
>>> https://lists.redsleeve.org/mailman/listinfo/users 
>>> <https://lists.redsleeve.org/mailman/listinfo/users>
>> 
> 