Re: Resizing LVM partitions

2024-01-26 Thread Miroslav Skoric

On 1/24/24 11:27 PM, Greg Wooledge wrote:

On Wed, Jan 24, 2024 at 10:43:51PM +0100, Miroslav Skoric wrote:

I do not have root account.


Sure you do.  You might not have a root *password* set.


(I use sudo from my user account.) I think I
already tried rescue mode in the past but was not prompted for root
password.


You can set a root password:

 sudo passwd root

That should allow you to enter single-user mode, or to login directly
as root on a text console, both of which are things that you may need
to do as a system administrator.  Especially if you're trying to
unmount /home.




Of course, sorry for my mixing terms. In fact I have never logged in 
directly as root so I thought the account was disabled or unusable.


In any case, after setting a root password I did this:

1. Log-out as user (in GUI)
2. Ctrl-Alt-F2
3. Log-in as root (in CLI)
4. # lvreduce --size -50G --resizefs /dev/mapper/localhost-home
Do you want to unmount "/home" ? [Y|n] y
...
...
Size of logical volume localhost/home changed from 261.00 GiB (66816 extents) to 211.00 GiB (54016 extents).

Logical volume localhost/home successfully resized.

... after reboot ...

# df -h
Filesystem  Size  Used Avail Use% Mounted on
udev1.5G 0  1.5G   0% /dev
tmpfs   297M  8.9M  288M   3% /run
/dev/mapper/localhost-root  6.2G  4.7G  1.2G  81% /
/dev/mapper/localhost-usr15G   11G  2.7G  80% /usr
tmpfs   1.5G 0  1.5G   0% /dev/shm
tmpfs   5.0M  4.0K  5.0M   1% /run/lock
tmpfs   1.5G 0  1.5G   0% /sys/fs/cgroup
/dev/sda1   228M  142M   74M  66% /boot
/dev/mapper/localhost-home  208G   60G  138G  31% /home
/dev/mapper/localhost-var   3.7G  2.0G  1.6G  57% /var
/dev/mapper/localhost-tmp   2.3G   57K  2.2G   1% /tmp
tmpfs   297M   32K  297M   1% /run/user/1000

# vgdisplay
  --- Volume group ---
  VG Name   localhost
  System ID
  Formatlvm2
  Metadata Areas1
  Metadata Sequence No  21
  VG Access read/write
  VG Status resizable
  MAX LV0
  Cur LV6
  Open LV   6
  Max PV0
  Cur PV1
  Act PV1
  VG Size   <297.85 GiB
  PE Size   4.00 MiB
  Total PE  76249
  Alloc PE / Size   62346 / <243.54 GiB
  Free  PE / Size   13903 / <54.31 GiB
  VG UUID   fbCaw1-u3SN-2HCy-w6y8-v0nK-QsFE-FETNZM

... and then I extended /, /usr, and /var by 1 GB each. Seems all OK.
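
That is, one lvextend per volume, as suggested elsewhere in the thread
(exact invocations untested here):

  lvextend --size +1G --resizefs /dev/mapper/localhost-root
  lvextend --size +1G --resizefs /dev/mapper/localhost-usr
  lvextend --size +1G --resizefs /dev/mapper/localhost-var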

Thank you!




Re: Resizing LVM partitions

2024-01-25 Thread Stefan Monnier
BTW, instead of rescue mode, you can use the initramfs to do such things
(I like to do that when I don't have a LiveUSB at hand because it lets
you manipulate *all* partitions, including /).

I.e. do something like:

- Reboot
- In Grub, edit your boot script (with `e`) to add `break=mount` to the
  kernel command line.
- Use `F10` to boot with that boot script.
- You should very quickly be dropped into a fairly minimal shell,
  without any password.
- None of your volumes are mounted yet.  Even LVM isn't initialized yet.
- Then type something like (guaranteed 100% untested)

 lvm vgchange -ay   # Activate your LVM volumes.
 mount /dev/mapper/localhost-root /mnt  # Mount /
 mount --bind /dev /mnt/dev # Expose device nodes inside the chroot.
 chroot /mnt /bin/bash  # Work from within the real root fs.
 lvreduce --size -50G --resizefs /dev/mapper/localhost-home
 exit   # Leave the chroot.
 umount /mnt/dev
 umount /mnt
 exit   # Continue booting.


--- Stefan



Re: Resizing LVM partitions

2024-01-24 Thread Greg Wooledge
On Wed, Jan 24, 2024 at 10:43:51PM +0100, Miroslav Skoric wrote:
> I do not have root account.

Sure you do.  You might not have a root *password* set.

> (I use sudo from my user account.) I think I
> already tried rescue mode in the past but was not prompted for root
> password.

You can set a root password:

sudo passwd root

That should allow you to enter single-user mode, or to login directly
as root on a text console, both of which are things that you may need
to do as a system administrator.  Especially if you're trying to
unmount /home.



On the deprecation of separate /usr (Was: Re: Resizing LVM partitions)

2024-01-24 Thread Andy Smith
Hello,

On Wed, Jan 24, 2024 at 09:20:47AM +0700, Max Nikulin wrote:
> Notice that a separate /usr is not supported by the latest systemd, which
> should be part of the next Debian release.

I don't think this is the case. What I think is not supported is a
separate /usr that is not mounted by initramfs. On Debian, if you do
nothing special, any separate /usr will be mounted by initramfs. As
far as I'm aware it is only a concern for:

people who have a /usr mount point
&& (
(do not use an initramfs)
||
(have meddled with their initramfs to stop it from mounting
/usr)
)

What systemd has decided to no longer support is what they call
"split /usr":

https://lists.freedesktop.org/archives/systemd-devel/2022-April/047673.html

They define that as "/usr that is not populated at boot time". i.e.
a /usr that would be mounted during boot from /etc/fstab or similar.
If /usr is mounted by the initramfs, that is before userland boot,
and systemd doesn't care about that. Debian does that where there is
a separate mount point for /usr.
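
A quick way to check whether this concern applies to a machine at all
(a minimal, untested sketch using findmnt from util-linux):

  if findmnt -n /usr >/dev/null; then
      echo "separate /usr: make sure the initramfs mounts it"
  else
      echo "no separate /usr mount point: nothing to worry about"
  fi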

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Resizing LVM partitions

2024-01-24 Thread Miroslav Skoric

On 1/24/24 3:20 AM, Max Nikulin wrote:

On 24/01/2024 06:29, Miroslav Skoric wrote:
 # df -h 



/dev/mapper/localhost-root  6.2G  4.7G  1.2G  81% /


Taking into account size of kernel packages, I would allocate a few G 
more for the root partition.


dpkg -s linux-image-6.1.0-17-amd64 | grep -i size
Installed-Size: 398452

Notice that a separate /usr is not supported by the latest systemd, which
should be part of the next Debian release.





Thank you. Will consider that.



Re: Resizing LVM partitions

2024-01-24 Thread Miroslav Skoric

On 1/24/24 12:42 AM, Greg Wooledge wrote:


You'll have to unmount it, which generally means you will have to reboot
in single-user mode, or from rescue media, whichever is easier.

If you aren't opposed to setting a root password (some people have *weird*
self-imposed restrictions, seriously), single-user mode (aka "rescue mode"
from the GRUB menu) is the standard way to do this.  Boot to the GRUB menu,
select rescue mode, give the root password when prompted, then you should
end up with a root shell prompt.  I don't recall whether /home will be
mounted at that point; if it is, unmount it.  Then you should be able
to do whatever resizing is needed.  When done, exit from the shell, and
the system should boot normally.




I do not have root account. (I use sudo from my user account.) I think I 
already tried rescue mode in the past but was not prompted for root 
password.




Re: Resizing LVM partitions

2024-01-24 Thread Andy Smith
Hi,

On Wed, Jan 24, 2024 at 12:29:18AM +0100, Miroslav Skoric wrote:
> Dunno ... in any case, for some reason the rescue mode I went to by booting
> from an old installation CD (dated back to Debian 6.0.1A Squeeze!) did not
> see partitions in the form of e.g. /dev/mapper/localhost-home, but rather
> /dev/localhost/home, so lvreduce refused to proceed.

Booting into an ancient userland like Debian 6 to do vital work on
your storage stack is completely insane. Bear in mind the amount of
changes and bug fixes that will have taken place in kernel,
filesystem and LVM tools between Debian 6 and Debian 12. You are
lucky we are not now having a very different kind of conversation.

Always try to use a rescue/live environment that is close to, or
newer than your actual system. Anything else risks catastrophe.

> So I tried vgdisplay. It returned ... among the others ...
> 
> ...
> Total PE  76249
> Alloc PE / Size   74378 / 290.54 GiB
> Free  PE / Size   1871 / 7.31 GiB

Summary: you managed to use some of that available space.

> In any case, what is left to do is to find the best way to take some space
> from /home which is largely underused.

You should be able to do this bit without going into a live/rescue
env. You won't be able to do it while any user is logged in, so shut
down any desktop environment and log out of all users. Log back in
as root from console and just do the lvreduce --resizefs from there.
It should ask if you are willing to unmount /home.

If there's anything left running from /home the unmount won't work
and you'll have to track down those stray processes, but that should
be easily doable.
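
For example (untested sketch; fuser is in the psmisc package, and lsof
works too):

  fuser -vm /home   # list every process holding files open on /home
  lsof /home        # naming a mount point lists all open files on that fs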

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Resizing LVM partitions

2024-01-24 Thread Greg Wooledge
On Wed, Jan 24, 2024 at 06:45:12AM +0100, to...@tuxteam.de wrote:
> On Tue, Jan 23, 2024 at 06:42:43PM -0500, Greg Wooledge wrote:
> > You'll have to unmount it, which generally means you will have to reboot
> > in single-user mode, or from rescue media, whichever is easier.
> 
> If you log in as root in a Linux console before the graphical
> thing gets started, you might get a stab at it, too. No reason
> for /home to be in use if no user has a session running

Depends on the system.  If you've got user crontabs that run @reboot
(or their systemd equivalents, if such a thing exists), those might
try to use files in $HOME.  If you're running a mail transfer agent
that receives email, it might attempt deliveries, which would involve
looking for ~/.forward or similar files, and deliveries could be done
to the home directory (but not by default on Debian).

But yeah, for *most* users, what you said is probably accurate.



Re: Resizing LVM partitions

2024-01-23 Thread tomas
On Tue, Jan 23, 2024 at 06:42:43PM -0500, Greg Wooledge wrote:
> On Wed, Jan 24, 2024 at 12:29:18AM +0100, Miroslav Skoric wrote:
> >   Total PE  76249
> >   Alloc PE / Size   75146 / <293.54 GiB
> >   Free  PE / Size   1103 / <4.31 GiB
> >   VG UUID   fbCaw1-u3SN-2HCy-w6y8-v0nK-QsFE-FETNZM
> > 
> > ... seems that I still have some 4 GB of unallocated space to add somewhere
> > if/when needed.
> 
> Yes.  Everything looks fine.
> 
> > In any case, what is left to do is to find the best way to take some space
> > from /home which is largely underused.
> 
> You'll have to unmount it, which generally means you will have to reboot
> in single-user mode, or from rescue media, whichever is easier.

If you log in as root in a Linux console before the graphical
thing gets started, you might get a stab at it, too. No reason
for /home to be in use if no user has a session running (I can
only vouch for a pretty minimal graphical system with no DE,
but it might work for the newfangled things, too).

Cheers
-- 
t




Re: Resizing LVM partitions

2024-01-23 Thread Max Nikulin

On 24/01/2024 06:29, Miroslav Skoric wrote:
 # df -h 



/dev/mapper/localhost-root  6.2G  4.7G  1.2G  81% /


Taking into account size of kernel packages, I would allocate a few G 
more for the root partition.


dpkg -s linux-image-6.1.0-17-amd64 | grep -i size
Installed-Size: 398452

Notice that a separate /usr is not supported by the latest systemd, which
should be part of the next Debian release.




Re: Resizing LVM partitions

2024-01-23 Thread Greg Wooledge
On Wed, Jan 24, 2024 at 12:29:18AM +0100, Miroslav Skoric wrote:
>   Total PE  76249
>   Alloc PE / Size   75146 / <293.54 GiB
>   Free  PE / Size   1103 / <4.31 GiB
>   VG UUID   fbCaw1-u3SN-2HCy-w6y8-v0nK-QsFE-FETNZM
> 
> ... seems that I still have some 4 GB of unallocated space to add somewhere
> if/when needed.

Yes.  Everything looks fine.

> In any case, what is left to do is to find the best way to take some space
> from /home which is largely underused.

You'll have to unmount it, which generally means you will have to reboot
in single-user mode, or from rescue media, whichever is easier.

If you aren't opposed to setting a root password (some people have *weird*
self-imposed restrictions, seriously), single-user mode (aka "rescue mode"
from the GRUB menu) is the standard way to do this.  Boot to the GRUB menu,
select rescue mode, give the root password when prompted, then you should
end up with a root shell prompt.  I don't recall whether /home will be
mounted at that point; if it is, unmount it.  Then you should be able
to do whatever resizing is needed.  When done, exit from the shell, and
the system should boot normally.



Re: Resizing LVM partitions

2024-01-23 Thread Miroslav Skoric

On 1/23/24 7:36 AM, Andy Smith wrote:


ext filesystems do need to be unmounted when shrinking them (they can
grow online, though). When you use the --resizefs (-r) option, LVM asks
you if you wish to unmount. Obviously you cannot do that on a
filesystem which is in use, which means you'll need a live or rescue
environment to do it for the root filesystem.

I'd shrink what else I could and then see where I am at. It's okay to do
them one at a time. LVM will just not do it if there's a problem.
Another thing I sometimes do in these situations is make a new LV and
move some of the things in / out into it where possible, to free up some
more space on /.



Dunno ... in any case, for some reason the rescue mode I went to by
booting from an old installation CD (dated back to Debian 6.0.1A
Squeeze!) did not see partitions in the form of e.g.
/dev/mapper/localhost-home, but rather /dev/localhost/home, so lvreduce
refused to proceed.


So I tried vgdisplay. It returned ... among the others ...

...
Total PE  76249
Alloc PE / Size   74378 / 290.54 GiB
Free  PE / Size   1871 / 7.31 GiB

... so I considered that 7.31 GB could be used for extending /, /usr, 
and /var file systems. I rebooted machine into normal operation and did 
the following:


 # vgdisplay

  --- Volume group ---
  VG Name   localhost
  System ID
  Formatlvm2
  Metadata Areas1
  Metadata Sequence No  17
  VG Access read/write
  VG Status resizable
  MAX LV0
  Cur LV6
  Open LV   6
  Max PV0
  Cur PV1
  Act PV1
  VG Size   <297.85 GiB
  PE Size   4.00 MiB
  Total PE  76249
  Alloc PE / Size   74378 / <290.54 GiB
  Free  PE / Size   1871 / <7.31 GiB
  VG UUID   fbCaw1-u3SN-2HCy-w6y8-v0nK-QsFE-FETNZM

 # df -h

Filesystem  Size  Used Avail Use% Mounted on
udev1.5G 0  1.5G   0% /dev
tmpfs   297M  8.8M  288M   3% /run
/dev/mapper/localhost-root  5.2G  4.7G  211M  96% /
/dev/mapper/localhost-usr14G   12G  948M  93% /usr
tmpfs   1.5G 0  1.5G   0% /dev/shm
tmpfs   5.0M  4.0K  5.0M   1% /run/lock
tmpfs   1.5G 0  1.5G   0% /sys/fs/cgroup
/dev/sda1   228M  133M   84M  62% /boot
/dev/mapper/localhost-tmp   2.3G   55K  2.2G   1% /tmp
/dev/mapper/localhost-var   2.7G  1.9G  659M  75% /var
/dev/mapper/localhost-home  257G   63G  182G  26% /home
tmpfs   297M   32K  297M   1% /run/user/1000

 # lvextend --size +1G --resizefs /dev/mapper/localhost-root
  Size of logical volume localhost/root changed from 5.32 GiB (1363 extents) to 6.32 GiB (1619 extents).
  Logical volume localhost/root successfully resized.
resize2fs 1.44.5 (15-Dec-2018)
Filesystem at /dev/mapper/localhost-root is mounted on /; on-line resizing required
old_desc_blocks = 22, new_desc_blocks = 26
The filesystem on /dev/mapper/localhost-root is now 6631424 (1k) blocks long.


 # df -h (to check the new status)

Filesystem  Size  Used Avail Use% Mounted on
udev1.5G 0  1.5G   0% /dev
tmpfs   297M  8.8M  288M   3% /run
/dev/mapper/localhost-root  6.2G  4.7G  1.2G  81% /
/dev/mapper/localhost-usr14G   12G  948M  93% /usr
tmpfs   1.5G 0  1.5G   0% /dev/shm
tmpfs   5.0M  4.0K  5.0M   1% /run/lock
tmpfs   1.5G 0  1.5G   0% /sys/fs/cgroup
/dev/sda1   228M  133M   84M  62% /boot
/dev/mapper/localhost-tmp   2.3G   55K  2.2G   1% /tmp
/dev/mapper/localhost-var   2.7G  1.9G  659M  75% /var
/dev/mapper/localhost-home  257G   63G  182G  26% /home
tmpfs   297M   32K  297M   1% /run/user/1000

 # lvextend --size +1G --resizefs /dev/mapper/localhost-usr
  Size of logical volume localhost/usr changed from <13.38 GiB (3425 extents) to <14.38 GiB (3681 extents).
  Logical volume localhost/usr successfully resized.
resize2fs 1.44.5 (15-Dec-2018)
Filesystem at /dev/mapper/localhost-usr is mounted on /usr; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/localhost-usr is now 3769344 (4k) blocks long.

 # df -h
Filesystem  Size  Used Avail Use% Mounted on
udev1.5G 0  1.5G   0% /dev
tmpfs   297M  8.8M  288M   3% /run
/dev/mapper/localhost-root  6.2G  4.7G  1.2G  81% /
/dev/mapper/localhost-usr15G   12G  1.9G  86% /usr
tmpfs   1.5G 0  1.5G   0% /dev/shm
tmpfs   5.0M  4.0K  5.0M   1% /run/lock
tmpfs   1.5G 0  1.5G   0% /sys/fs/cgroup
/dev/sda1   228M  133M   84M  62% /boot
/dev/mapper/localhost-tmp   2.3G   55K  2.2G   1% /tmp
/dev/mappe

Re: Resizing LVM partitions

2024-01-23 Thread Miroslav Skoric

On 1/22/24 11:21 PM, Greg Wooledge wrote:

On Mon, Jan 22, 2024 at 10:41:57PM +0100, Miroslav Skoric wrote:

As I need to extend & resize more than one LV in the file system (/, /usr,
and /var), should they all need to be unmounted before the operation? As I
remember, it is ext3 system on that comp.


What??  I don't think these words mean what you think they mean.

An LV is a logical volume, which is like a virtual partition.  It's a
block device, like /dev/sda2.  You can use an LV the same way you would
use a partition -- you can use it for swap space, or a file system, or
other purposes.

A file system is a mountable directory structure that you can put inside
a partition, or an LV.  File system types include ext4, ext3, xfs, vfat,
and so on.



Sorry for my ignorance regarding terminology, I mix terms sometimes :-)


If your system has separately mounted file systems for /, /usr and
/var and you want to shrink ALL of them, then yes, you would need to
unmount all three of them, shrink them, then (re)boot.  You can't
unmount / during normal operations, so the only ways to shrink / would
involved booting in a special way, either with some external medium,
or with specific kernel parameters.  Thus, you'd typically reboot to
get back to normal operations afterward.



Let me clarify: I did not plan to shrink all of those, but rather just
one (/home). The other three (/, /usr, and /var) will be extended using
the freed space.


I managed to locate the first CD of my very old initial installation set
(squeeze). However, booting from that one did not help me get /home
available for shrinking. See below for what I did instead.



However, if you're in a position where you think you need to make
dramatic changes to FOUR of your mounted file systems, perhaps you
might want to consider restarting from scratch.  Ponder why you have
separate file systems at all.  Are they really giving you a benefit?
Have you ever filled up one of them and thought "Oh wow, I am *so*
glad I separated these file systems so I didn't fill up ___ as well!"
Or are they just giving you grief with no benefits?




Well, I belong to those who will try any possible way to prolong the
life of an existing installation, no matter how old it is. In my case it
started from squeeze a decade or more ago and has been gradually
upgraded over the years. I know that some years ago I resized the file
systems for similar reasons, and it worked at the time. But the
procedure has disappeared from memory :-)


Reinstalling from scratch is always possible, of course.



Re: Resizing LVM partitions

2024-01-23 Thread Miroslav Skoric

On 1/22/24 7:01 PM, to...@tuxteam.de wrote:


Ah, forgot to say: "pvdisplay -m" will give you a "physical" map of
your physical volume. So you get an idea what is where and where
you find gaps.




"pvdisplay -m" provided some idea that there was some free space but (if 
I am not wrong) not how much in MB, GB, or else.


I found gvdisplay more precise in that direction.



Re: Resizing LVM partitions

2024-01-23 Thread Miroslav Skoric

On 1/22/24 5:02 PM, Greg Wooledge wrote:

On Mon, Jan 22, 2024 at 03:17:36PM +, Alain D D Williams wrote:

The shrinking of /home is the hard part. You MUST first unmount /home, then
resize the file system, then resize the logical volume.


Before doing any of that, one should check the volume group and see
if there are unallocated hunks of free space that can simply be assigned
to the root LV.



vgdisplay

?

It helped me for now, see my other responses to the topic ...




Re: Resizing LVM partitions

2024-01-22 Thread Andy Smith
Hi,

On Mon, Jan 22, 2024 at 10:59:55PM +0100, Miroslav Skoric wrote:
> On 1/22/24 6:59 PM, to...@tuxteam.de wrote:
> > On Mon, Jan 22, 2024 at 03:40:06PM +, Alain D D Williams wrote:
> > > On Mon, Jan 22, 2024 at 10:29:55AM -0500, Stefan Monnier wrote:
> > > > lvreduce --size -50G --resizefs /dev/mapper/localhost-home
> > > 
> > > Oh, even better. It is a long time since I looked at that man page.
> > > 
> > > Does this still need to be done with the file system unmounted or can it
> > > be done with an active file system these days?
> > 
> > You have first to shrink the file system (if it's ext4, you can use
> > resize2fs: note that you can only *grow* an ext4 which is mounted
> > (called "online resizing") -- to *shrink* it, it has to be unmounted.
> > 
> 
> I will check it again, but I think the file systems in that LVM are ext3. So
> it requires all of them to be unmounted prior to resizing?

ext filesystems do need to be unmounted when shrinking them (they can
grow online, though). When you use the --resizefs (-r) option, LVM asks
you if you wish to unmount. Obviously you cannot do that on a
filesystem which is in use, which means you'll need a live or rescue
environment to do it for the root filesystem.

I'd shrink what else I could and then see where I am at. It's okay to do
them one at a time. LVM will just not do it if there's a problem.
Another thing I sometimes do in these situations is make a new LV and
move some of the things in / out into it where possible, to free up some
more space on /.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Resizing LVM partitions

2024-01-22 Thread tomas
On Mon, Jan 22, 2024 at 10:59:55PM +0100, Miroslav Skoric wrote:

[...]

> That last resize2fs (without params) would not work here, or at least it
> would not work for my three file systems that need to be extended: /, /usr,
> and /var. Maybe extend each of them separately like this:
> 
> lvextend --size +1G --resizefs /dev/mapper/localhost-root
> lvextend --size +1G --resizefs /dev/mapper/localhost-usr
> lvextend --size +1G --resizefs /dev/mapper/localhost-var

Ah, I didn't know of lvextend's --resizefs option. It seems lvreduce
has the same. Their man pages refer to fsadm for that, which is short
on details.

Still, yes, you have to unmount ext2/ext3/ext4 to reduce their sizes
(you can "grow" them while mounted).

Fsadm has an option to do that for you; no idea whether lvextend
or lvreduce can pass it to fsadm via the --resizefs option.

Cheers
-- 
t




Re: Resizing LVM partitions

2024-01-22 Thread Greg Wooledge
On Mon, Jan 22, 2024 at 10:41:57PM +0100, Miroslav Skoric wrote:
> As I need to extend & resize more than one LV in the file system (/, /usr,
> and /var), should they all need to be unmounted before the operation? As I
> remember, it is ext3 system on that comp.

What??  I don't think these words mean what you think they mean.

An LV is a logical volume, which is like a virtual partition.  It's a
block device, like /dev/sda2.  You can use an LV the same way you would
use a partition -- you can use it for swap space, or a file system, or
other purposes.

A file system is a mountable directory structure that you can put inside
a partition, or an LV.  File system types include ext4, ext3, xfs, vfat,
and so on.

If your system has separately mounted file systems for /, /usr and
/var and you want to shrink ALL of them, then yes, you would need to
unmount all three of them, shrink them, then (re)boot.  You can't
unmount / during normal operations, so the only ways to shrink / would
involved booting in a special way, either with some external medium,
or with specific kernel parameters.  Thus, you'd typically reboot to
get back to normal operations afterward.

However, if you're in a position where you think you need to make
dramatic changes to FOUR of your mounted file systems, perhaps you
might want to consider restarting from scratch.  Ponder why you have
separate file systems at all.  Are they really giving you a benefit?
Have you ever filled up one of them and thought "Oh wow, I am *so*
glad I separated these file systems so I didn't fill up ___ as well!"
Or are they just giving you grief with no benefits?



Re: Resizing LVM partitions

2024-01-22 Thread Miroslav Skoric

On 1/22/24 6:59 PM, to...@tuxteam.de wrote:

On Mon, Jan 22, 2024 at 03:40:06PM +, Alain D D Williams wrote:

On Mon, Jan 22, 2024 at 10:29:55AM -0500, Stefan Monnier wrote:

lvextend --size +1G --resizefs /dev/mapper/localhost-home

Ie get lvextend to do the maths & work it out for me.

Those who are cleverer than me might be able to tell you how to get it right
first time!


lvreduce --size -50G --resizefs /dev/mapper/localhost-home


Oh, even better. It is a long time since I looked at that man page.

Does this still need to be done with the file system unmounted, or can it be
done with an active file system these days?


You have first to shrink the file system (if it's ext4, you can use
resize2fs: note that you can only *grow* an ext4 which is mounted
(called "online resizing) -- to *shrink* it, it has to be unmounted.



I will check it again, but I think the file systems in that LVM are
ext3. So it requires all of them to be unmounted prior to resizing?



Since I wasn't quite sure whether ext2's Gs are the same as LVM's
and didn't want to bother with whatever clippings each process
takes, what I did in this situation was:

  - shrink (resize2fs) the file system to a size clearly below target
  - resize the LVM to my target size
  - resize2fs again without params, which lets it take whatever the
partition offers



That last resize2fs (without params) would not work here, or at least it
would not work for my three file systems that need to be extended: /,
/usr, and /var. Maybe extend each of them separately like this:


lvextend --size +1G --resizefs /dev/mapper/localhost-root
lvextend --size +1G --resizefs /dev/mapper/localhost-usr
lvextend --size +1G --resizefs /dev/mapper/localhost-var

?



Re: Resizing LVM partitions

2024-01-22 Thread Miroslav Skoric

On 1/22/24 4:40 PM, Alain D D Williams wrote:

On Mon, Jan 22, 2024 at 10:29:55AM -0500, Stefan Monnier wrote:

lvextend --size +1G --resizefs /dev/mapper/localhost-home

Ie get lvextend to do the maths & work it out for me.

Those who are cleverer than me might be able to tell you how to get it right
first time!


lvreduce --size -50G --resizefs /dev/mapper/localhost-home


Oh, even better. It is a long time since I looked at that man page.

Does this still need to be done with the file system unmounted, or can it be
done with an active file system these days?



As I need to extend & resize more than one LV in the file system (/, 
/usr, and /var), should they all need to be unmounted before the 
operation? As I remember, it is ext3 system on that comp.




Re: Resizing LVM partitions

2024-01-22 Thread Miroslav Skoric

On 1/22/24 4:17 PM, Alain D D Williams wrote:

On Mon, Jan 22, 2024 at 03:32:30PM +0100, sko...@uns.ac.rs wrote:

I am getting the following message at any boot:

"The volume "Filesystem root" has only 221.1 MB disk space remaining."

  df -h says:

Filesystem  Size  Used Avail Use% Mounted on
udev1.5G 0  1.5G   0% /dev
tmpfs   297M  9.0M  288M   4% /run
/dev/mapper/localhost-root  5.2G  4.7G  211M  96% /
/dev/mapper/localhost-usr14G   12G  948M  93% /usr
tmpfs   1.5G 0  1.5G   0% /dev/shm
tmpfs   5.0M  4.0K  5.0M   1% /run/lock
tmpfs   1.5G 0  1.5G   0% /sys/fs/cgroup
/dev/sda1   228M  133M   84M  62% /boot
/dev/mapper/localhost-tmp   2.3G   57K  2.2G   1% /tmp
/dev/mapper/localhost-var   2.7G  2.5G   55M  98% /var
/dev/mapper/localhost-home  257G   73G  172G  30% /home
tmpfs   297M   40K  297M   1% /run/user/1000

As my system has encrypted LVM, I suppose that I shall reduce some space
used for /home, and then use it to extend /, /usr, and /var logical
partitions. I think I did (or tried to do) something similar several years
ago, but forgot the proper procedure. Any link for a good tutorial is
welcomed. Thanks.


The shrinking of /home is the hard part. You MUST first unmount /home, then
resize the file system, then resize the logical volume.

umount /home

Find out how big it is:
resize2fs /dev/mapper/localhost-home

Change the filesystem size:
resize2fs /dev/mapper/localhost-home NEW-SIZE

Change the partition size:
lvextend --size 200G /dev/mapper/localhost-home

The hard bit is working out what NEW-SIZE should be and having it such
that you use all of the partition but without making the file system size
greater than the partition size - ie getting the last few megabytes right.

What I do is make NEW-SIZE 2GB smaller than I want (assuming that it still
fits), and the size I give to lvextend 1GB smaller - so it all works, but
there is wasted space & it is not quite big enough. I then do:

lvextend --size +1G --resizefs /dev/mapper/localhost-home

Ie get lvextend to do the maths & work it out for me.

Those who are cleverer than me might be able to tell you how to get it right
first time!

mount /home

Extending the others is easy and can be done when the system is running &
active, something like:

lvextend --size +1G --resizefs /dev/mapper/localhost-var

Finally: ensure that you have a good backup of /home before you start.



Sounds interesting. Thank you. Will see other opinions too.



Re: Resizing LVM partitions

2024-01-22 Thread Greg Wooledge
On Mon, Jan 22, 2024 at 07:01:13PM +0100, to...@tuxteam.de wrote:
> On Mon, Jan 22, 2024 at 11:02:06AM -0500, Greg Wooledge wrote:
> > On Mon, Jan 22, 2024 at 03:17:36PM +, Alain D D Williams wrote:
> > > The shrinking of /home is the hard part. You MUST first unmount /home, 
> > > then
> > > resize the file system, then resize the logical volume.
> > 
> > Before doing any of that, one should check the volume group and see
> > if there are unallocated hunks of free space that can simply be assigned
> > to the root LV.
> 
> Ah, forgot to say: "pvdisplay -m" will give you a "physical" map of
> your physical volume. So you get an idea what is where and where
> you find gaps.

A volume group (VG) may be comprised of one or more physical volumes
(PV), and the free space would be counted at the VG level.  So I'd suggest
"vgdisplay" instead.  This tells you how many "PE" (physical extents,
aka hunks of space) are allocated, and how many are free.
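
Or, more compactly (an aside; vgs ships in the same lvm2 package):

  vgs   # the VFree column shows the unallocated space in each VG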



Re: Resizing LVM partitions

2024-01-22 Thread Greg Wooledge
On Mon, Jan 22, 2024 at 01:06:16PM -0500, Gremlin wrote:
> I used to use LVM and RAID, but I quit using that after finding out that
> partitioning the drive and using gparted was much easier

If you allocate all the space during installation and don't leave any
to make adjustments, or to make snapshots, then you're not getting
any of the benefits of LVM.  In this case, you're just doing static
partitioning with extra complexity, and your conclusion would be correct.

The key to LVM is to leave some space unallocated.  Then you get *options*.



Re: Resizing LVM partitions

2024-01-22 Thread Gremlin

On 1/22/24 10:17, Alain D D Williams wrote:

On Mon, Jan 22, 2024 at 03:32:30PM +0100, sko...@uns.ac.rs wrote:

I am getting the following message at any boot:

"The volume "Filesystem root" has only 221.1 MB disk space remaining."

  df -h says:

Filesystem  Size  Used Avail Use% Mounted on
udev1.5G 0  1.5G   0% /dev
tmpfs   297M  9.0M  288M   4% /run
/dev/mapper/localhost-root  5.2G  4.7G  211M  96% /
/dev/mapper/localhost-usr14G   12G  948M  93% /usr
tmpfs   1.5G 0  1.5G   0% /dev/shm
tmpfs   5.0M  4.0K  5.0M   1% /run/lock
tmpfs   1.5G 0  1.5G   0% /sys/fs/cgroup
/dev/sda1   228M  133M   84M  62% /boot
/dev/mapper/localhost-tmp   2.3G   57K  2.2G   1% /tmp
/dev/mapper/localhost-var   2.7G  2.5G   55M  98% /var
/dev/mapper/localhost-home  257G   73G  172G  30% /home
tmpfs   297M   40K  297M   1% /run/user/1000

As my system has encrypted LVM, I suppose that I shall reduce some space
used for /home, and then use it to extend /, /usr, and /var logical
partitions. I think I did (or tried to do) something similar several years
ago, but forgot the proper procedure. Any link for a good tutorial is
welcomed. Thanks.


The shrinking of /home is the hard part. You MUST first unmount /home, then
resize the file system, then resize the logical volume.

umount /home

Find out how big it is:
resize2fs /dev/mapper/localhost-home

Change the filesystem size:
resize2fs /dev/mapper/localhost-home NEW-SIZE

Change the partition size:
lvextend --size 200G /dev/mapper/localhost-home

The hard bit is working out what NEW-SIZE should be and having it such
that you use all of the partition but without making the file system size
greater than the partition size - ie getting the last few megabytes right.

What I do is make NEW-SIZE 2GB smaller than I want (assuming that it still
fits), and the size I give to lvextend 1GB smaller - so it all works, but
there is wasted space & it is not quite big enough. I then do:

lvextend --size +1G --resizefs /dev/mapper/localhost-home

Ie get lvextend to do the maths & work it out for me.

Those who are cleverer than me might be able to tell you how to get it right
first time!

mount /home

Extending the others is easy and can be done when the system is running &
active, something like:

lvextend --size +1G --resizefs /dev/mapper/localhost-var

Finally: ensure that you have a good backup of /home before you start.



I used to use LVM and RAID, but I quit using that after finding out that
partitioning the drive and using gparted was much easier





Re: Resizing LVM partitions

2024-01-22 Thread tomas
On Mon, Jan 22, 2024 at 11:02:06AM -0500, Greg Wooledge wrote:
> On Mon, Jan 22, 2024 at 03:17:36PM +, Alain D D Williams wrote:
> > The shrinking of /home is the hard part. You MUST first unmount /home, then
> > resize the file system, then resize the logical volume.
> 
> Before doing any of that, one should check the volume group and see
> if there are unallocated hunks of free space that can simply be assigned
> to the root LV.

Ah, forgot to say: "pvdisplay -m" will give you a "physical" map of
your physical volume. So you get an idea what is where and where
you find gaps.

Cheers
-- 
t




Re: Resizing LVM partitions

2024-01-22 Thread tomas
On Mon, Jan 22, 2024 at 03:40:06PM +, Alain D D Williams wrote:
> On Mon, Jan 22, 2024 at 10:29:55AM -0500, Stefan Monnier wrote:
> > > lvextend --size +1G --resizefs /dev/mapper/localhost-home
> > >
> > > Ie get lvextend to do the maths & work it out for me.
> > >
> > > Those who are cleverer than me might be able to tell you how to get it 
> > > right
> > > first time!
> > 
> > lvreduce --size -50G --resizefs /dev/mapper/localhost-home
> 
> Oh, even better. It is a long time since I looked at that man page.
> 
> Does this still need to be done with the file system unmounted, or can it be
> done with an active file system these days?

You have first to shrink the file system (if it's ext4, you can use
resize2fs: note that you can only *grow* an ext4 which is mounted
(called "online resizing) -- to *shrink* it, it has to be unmounted.

Since I wasn't quite sure whether ext2's Gs are the same as LVM's
and didn't want to bother with whatever clippings each process
takes, what I did in this situation was:

 - shrink (resize2fs) the file system to a size clearly below target
 - resize the LVM to my target size
 - resize2fs again without params, which lets it take whatever the
   partition offers

Sounds complicated, but is not :-)
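
Spelled out as an untested sketch, with made-up sizes and the LV name
from this thread:

  umount /home
  e2fsck -f /dev/mapper/localhost-home        # required before an offline shrink
  resize2fs /dev/mapper/localhost-home 190G   # clearly below the target
  lvreduce --size 200G /dev/mapper/localhost-home   # the real target size
  resize2fs /dev/mapper/localhost-home        # no size given: grow to fill the LV
  mount /home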

You can shrink the partition to be smaller than the file system,
but then you'll trash it sooner or later, when two file systems
start quibbling over blocks on the fence like angry neighbours :)

Cheers
-- 
t




Re: Resizing LVM partitions

2024-01-22 Thread Greg Wooledge
On Mon, Jan 22, 2024 at 03:17:36PM +, Alain D D Williams wrote:
> The shrinking of /home is the hard part. You MUST first unmount /home, then
> resize the file system, then resize the logical volume.

Before doing any of that, one should check the volume group and see
if there are unallocated hunks of free space that can simply be assigned
to the root LV.

One of the fundamental *reasons* to use LVM is to leave a bunch of space
unallocated, and assign it to whatever needs it later, once the storage
needs become known.  Leaving some unallocated space also allows the
use of snapshots, which are nice when doing backups.
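
For instance (untested sketch, with the VG name from this thread and an
arbitrary 5G of spare extents):

  lvcreate --snapshot --size 5G --name home-snap localhost/home
  # ... back up from the frozen snapshot, e.g. mount it read-only ...
  lvremove localhost/home-snap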

I heard someone say, once, that the Debian installer will assign all of
the space in a VG during installation, if you follow its "guided" path.
This is a tragedy, if it's still true.



Re: Resizing LVM partitions

2024-01-22 Thread Alain D D Williams
On Mon, Jan 22, 2024 at 10:29:55AM -0500, Stefan Monnier wrote:
> > lvextend --size +1G --resizefs /dev/mapper/localhost-home
> >
> > Ie get lvextend to do the maths & work it out for me.
> >
> > Those who are cleverer than me might be able to tell you how to get it right
> > first time!
> 
> lvreduce --size -50G --resizefs /dev/mapper/localhost-home

Oh, even better. It is a long time since I looked at that man page.

Does this still need to be done with the file system unmounted, or can it be
done with an active file system these days?

-- 
Alain Williams
Linux/GNU Consultant - Mail systems, Web sites, Networking, Programmer, IT 
Lecturer.
+44 (0) 787 668 0256  https://www.phcomp.co.uk/
Parliament Hill Computers. Registration Information: 
https://www.phcomp.co.uk/Contact.html
#include 



Re: Resizing LVM partitions

2024-01-22 Thread Stefan Monnier
> lvextend --size +1G --resizefs /dev/mapper/localhost-home
>
> Ie get lvextend to do the maths & work it out for me.
>
> Those who are cleverer than me might be able to tell you how to get it right
> first time!

lvreduce --size -50G --resizefs /dev/mapper/localhost-home

?


Stefan



Re: Resizing LVM partitions

2024-01-22 Thread Alain D D Williams
On Mon, Jan 22, 2024 at 03:32:30PM +0100, sko...@uns.ac.rs wrote:
> I am getting the following message at any boot:
> 
> "The volume "Filesystem root" has only 221.1 MB disk space remaining."
> 
>  df -h says:
> 
> Filesystem  Size  Used Avail Use% Mounted on
> udev1.5G 0  1.5G   0% /dev
> tmpfs   297M  9.0M  288M   4% /run
> /dev/mapper/localhost-root  5.2G  4.7G  211M  96% /
> /dev/mapper/localhost-usr14G   12G  948M  93% /usr
> tmpfs   1.5G 0  1.5G   0% /dev/shm
> tmpfs   5.0M  4.0K  5.0M   1% /run/lock
> tmpfs   1.5G 0  1.5G   0% /sys/fs/cgroup
> /dev/sda1   228M  133M   84M  62% /boot
> /dev/mapper/localhost-tmp   2.3G   57K  2.2G   1% /tmp
> /dev/mapper/localhost-var   2.7G  2.5G   55M  98% /var
> /dev/mapper/localhost-home  257G   73G  172G  30% /home
> tmpfs   297M   40K  297M   1% /run/user/1000
> 
> As my system has encrypted LVM, I suppose that I shall reduce some space
> used for /home, and then use it to extend /, /usr, and /var logical
> partitions. I think I did (or tried to do) something similar several years
> ago, but forgot the proper procedure. Any link for a good tutorial is
> welcomed. Thanks.

The shrinking of /home is the hard part. You MUST first unmount /home, then
resize the file system, then resize the logical volume.

umount /home

Find out how big it is:
resize2fs /dev/mapper/localhost-home

Change the filesystem size:
resize2fs /dev/mapper/localhost-home NEW-SIZE

Change the partition size:
lvextend --size 200G /dev/mapper/localhost-home

The hard bit is working out what NEW-SIZE should be and having it such
that you use all of the partition but without making the file system size
greater than the partition size - ie getting the last few megabytes right.

What I do is make NEW-SIZE 2GB smaller than I want (assuming that it still
fits), and the size I give to lvextend 1GB smaller - so it all works, but
there is wasted space & it is not quite big enough. I then do:

lvextend --size +1G --resizefs /dev/mapper/localhost-home

Ie get lvextend to do the maths & work it out for me.

Those who are cleverer than me might be able to tell you how to get it right
first time!

mount /home

Extending the others is easy and can be done when the system is running &
active, something like:

lvextend --size +1G --resizefs /dev/mapper/localhost-var

Finally: ensure that you have a good backup of /home before you start.
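
One way to take the guesswork out of NEW-SIZE (an aside, not part of the
recipe above): ask the filesystem for its exact geometry first:

  tune2fs -l /dev/mapper/localhost-home | grep -E '^Block (count|size)'
  # block count x block size = the current filesystem size in bytes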

-- 
Alain Williams
Linux/GNU Consultant - Mail systems, Web sites, Networking, Programmer, IT 
Lecturer.
+44 (0) 787 668 0256  https://www.phcomp.co.uk/
Parliament Hill Computers. Registration Information: 
https://www.phcomp.co.uk/Contact.html
#include 



Resizing LVM partitions

2024-01-22 Thread skoric
I am getting the following message at any boot:

"The volume "Filesystem root" has only 221.1 MB disk space remaining."

 df -h says:

Filesystem  Size  Used Avail Use% Mounted on
udev1.5G 0  1.5G   0% /dev
tmpfs   297M  9.0M  288M   4% /run
/dev/mapper/localhost-root  5.2G  4.7G  211M  96% /
/dev/mapper/localhost-usr14G   12G  948M  93% /usr
tmpfs   1.5G 0  1.5G   0% /dev/shm
tmpfs   5.0M  4.0K  5.0M   1% /run/lock
tmpfs   1.5G 0  1.5G   0% /sys/fs/cgroup
/dev/sda1   228M  133M   84M  62% /boot
/dev/mapper/localhost-tmp   2.3G   57K  2.2G   1% /tmp
/dev/mapper/localhost-var   2.7G  2.5G   55M  98% /var
/dev/mapper/localhost-home  257G   73G  172G  30% /home
tmpfs   297M   40K  297M   1% /run/user/1000

As my system has encrypted LVM, I suppose that I shall reduce some space
used for /home, and then use it to extend /, /usr, and /var logical
partitions. I think I did (or tried to do) something similar several years
ago, but forgot the proper procedure. Any link for a good tutorial is
welcomed. Thanks.

Misko



Re: resizing PDF files with Ghostscript

2023-09-18 Thread Curt
On 2023-09-17, Greg Marks  wrote:
>
> I am trying to use Ghostscript to resize PDF files to letter page size,
> but on certain files the output is not the correct size.  As an example:
>
>$wget https://gmarks.org/abrams_anh_pardo.pdf
>
>$pdfinfo abrams_anh_pardo.pdf
>...
>Page size:  539 x 737 pts
>...
>
>$gs -o resized_file.pdf -sDEVICE=pdfwrite -dFIXEDMEDIA -dPDFFitPage \
>  -dDEVICEWIDTHPOINTS=612 -dDEVICEHEIGHTPOINTS=792 -dBATCH -dSAFER \
>  abrams_anh_pardo.pdf
>
>$pdfinfo resized_file.pdf
>...
>Page size:  579.224 x 792 pts
>...
>
 
It appears the output of pdfwrite has a MediaBox of 612x792 but a CropBox
579.22 pts wide (16.39 to 595.61), which must be the result of scaling the
original page to fit Letter and centering the scaled area on the page.

curty@einstein:~$ pdfinfo -box resized_file.pdf 

Page size:  579.224 x 792 pts
Page rot:   0
MediaBox:   0.00 0.00   612.00   792.00
CropBox:   16.39 0.00   595.61   792.00
BleedBox:  16.39 0.00   595.61   792.00
TrimBox:   16.39 0.00   595.61   792.00
ArtBox:16.39 0.00   595.61   792.00
File size:  26538 bytes
Optimized:  no
PDF version:1.7

I'm unaware of any way of "fixing" this other than the following kludge
(unless you can tell your printing process to use MediaBox and ignore CropBox).

 sed -e "/CropBox/,/]/s#.# #g" resized_file.pdf 

curty@einstein:~$ pdfinfo -box resized_file.pdf 

Page size:  612 x 792 pts (letter)
Page rot:   0
MediaBox:   0.00 0.00   612.00   792.00
CropBox:0.00 0.00   612.00   792.00
BleedBox:   0.00 0.00   612.00   792.00
TrimBox:0.00 0.00   612.00   792.00
ArtBox: 0.00 0.00   612.00   792.00
File size:  26538 bytes
Optimized:  no
PDF version:1.7




resizing PDF files with Ghostscript

2023-09-16 Thread Greg Marks
I am trying to use Ghostscript to resize PDF files to letter page size,
but on certain files the output is not the correct size.  As an example:

   $wget https://gmarks.org/abrams_anh_pardo.pdf

   $pdfinfo abrams_anh_pardo.pdf 
   ...
   Page size:  539 x 737 pts
   ...

   $gs -o resized_file.pdf -sDEVICE=pdfwrite -dFIXEDMEDIA -dPDFFitPage 
-dDEVICEWIDTHPOINTS=612 -dDEVICEHEIGHTPOINTS=792 -dBATCH -dSAFER 
abrams_anh_pardo.pdf

   $pdfinfo resized_file.pdf 
   ...
   Page size:  579.224 x 792 pts
   ...

Despite the flags -dDEVICEWIDTHPOINTS=612 and -dDEVICEHEIGHTPOINTS=792,
the page size of the output file is 579.224 x 792 pts instead of 612 x
792 pts.  (Interestingly, Ghostscript did change the page size, just
not to the correct dimensions.)  As some printers will refuse to print
files that aren't 612 x 792, I'd like to be able to convert such files
correctly.  How does one do that with Ghostscript?

Incidentally, the command

   $pdfjam --outfile resized_file.pdf --paper letter abrams_anh_pardo.pdf

does produce the correct output; however, on certain PDF files, pdfjam
yields output files with blank pages, so I'd like to be able to use
Ghostscript as an alternative.

Best regards,
Greg Marks




Re: Disabling the automatic resizing

2022-10-24 Thread David
Hi Johnny

I am sending you a copy of this message in CC in case you are not subscribed
to the mailing list. However, can you please ensure that your reply goes to
the mailing list, not just to me. Because this is a community discussion,
not a private one. So I have copied your private reply below here so that the
community does not miss part of the conversation. My reply is at the bottom
of this message.

On Tue, 25 Oct 2022 at 06:36, Johnny de Villiers
 wrote:
>
> Hi David
>
> Thank you for the fast reply... Attempted to do just that but instead of 
> fdisk was using gparted and the system did just what you said... IT HUNG 
> ITSELF;-P
>
> However when I found this to be the case I went ahead and reflashed, booted 
> and then after all was done attempted the resizing... however upon the next 
> reboot the partition was resized again!? Then while describing this to a 
> friend an idea popped up, the script was always looking for the end of the 
> disk, right?!
>
> The script is looking for the end so why not swap the second and third 
> partitions places? Attempted it and... ... ... what would you know it worked! 
> Not ideal I know but. You say that the resize function does not exist outside 
> of "initrd", what are the chances that you guys would be able to add a 
> function/variable to prevent the resize? In raspbian this is achieved by 
> removing the init function on the cmdline.txt and in ubuntu by adding a
>
> growpart:
>   mode: disabled
>
> to the usrconfig that gets parsed to the cloud-init?! Or should I just have a 
> look into removing the  " /scripts/local-bottom/rpi-resizerootfs " before 
> boot?
>
> Looking forward to hearing back from you!

I am just a user of the image, not a developer. I just shared with
you some experience that I had when using the image.

If you want to have a conversation regarding changes to the image,
you will need to contact the developer/team producing the image.

I don't have any more information about this than I included
in my previous message.



Re: Disabling the automatic resizing

2022-10-24 Thread David
On Tue, 25 Oct 2022 at 00:09, Johnny de Villiers
 wrote:

> Am working on a github repository to give a walkthrough on the setup of
> a device hosting a linux interface, this will extend all the way through
> docker and a web-server, however have yet to figure out  how to disable
> the automatic resizing on a Raspberry Pi using your pre-configured .img

> Have already written the scripts for the resize and docker components for
> Ubuntu and Raspbian, however the resizing of the root partition on the
> debian side... well I can't do it because the automatic resizing
> configures the "/" partition to consume all of the available space, the
> scripts that I have already written disable the resizing on the Raspbian
> and Ubuntu "init" sides as one can easily grow a "live" partition but the
> kernel does not like being made smaller.

> Any insight on how to disable the "grow part" on the debian side will be
> much appreciated!

Hello

I assume you are using one of the images found here[1]?

If that is correct, note this additional information (from [2]):

  Partitioning: The image ships as a 1500MB media, which means you can use
  it in any device of that size or greater. The first 300MB will be set
  aside as a boot partition. The image will resize to use all of the
  available space on the first boot — If you want it not to span the SD’s
  full size, after flashing it to your SD card, you can use fdisk to create
  a limiting dummy partition at the limit you want to set. After the first
  boot, just delete (or use!) that partition; the auto-resizer will not be
  run again.

The script that performs this operation is in the image:
  /scripts/local-bottom/rpi-resizerootfs

When I did this a while ago, I spent some time wrestling with this script
to persuade it to work. I found that this script has some additional
requirements not stated above:

1) There must be no additional partitions except the dummy partition, or
   else the script will hang at boot.

2) The dummy partition must include the last sector of the drive. Because
   if parted (in the script) reports any 'Free Space' at all after the
   dummy partition, the resize script will decide to do nothing.

3) The reason that "the auto-resizer will not be run again" is (if my
   memory is correct) that it does not exist outside of the initrd, which
   subsequently gets rebuilt to not include it. So if you want to read the
   script, you have to look into the initrd before booting the image. If
   you try to look for this script after booting, you will never find it.

[1] https://raspi.debian.net/tested-images/
[2] https://raspi.debian.net/defaults-and-settings/
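
An untested sketch of that dummy-partition trick, assuming an
msdos-labelled card that shows up as /dev/sdX and a dummy that becomes
partition 3 (double-check both!):

  parted /dev/sdX mkpart primary 4GiB 100%   # dummy from your chosen limit to the last sector
  # first boot: the auto-resizer sees no free space and does nothing;
  # afterwards delete (or use) the dummy:
  parted /dev/sdX rm 3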



Disabling the automatic resizing

2022-10-24 Thread Johnny de Villiers
Good day...

Am working on a github repository to give a walkthrough on the setup of a
device hosting a linux interface, this will extend all the way through
docker and a web-server, however have yet to figure out  how to disable the
automatic resizing on a Raspberry Pi using your pre-configured .img

Have already written the scripts for the resize and docker components for
Ubuntu and Raspbian, however the resizing of the root partition on the
debian side... well I can't do it because the automatic resizing configures
the "/" partition to consume all of the available space, the scripts that I
have already written disable the resizing on the Raspbian and Ubuntu "init"
sides as one can easily grow a "live" partition but the kernel does not
like being made smaller.

Any insight on how to disable the "grow part" on the debian side will be
much appreciated!

-- 
Thank you
Kind Regards
Johnny de Villiers


Re: gnu screen and resizing terminal window

2022-01-27 Thread Tim Woodall

On Wed, 26 Jan 2022, Bijan Soleymani wrote:


On 2022-01-26 5:55 p.m., Bijan Soleymani wrote:
Actually apparently putty does support remote resizing. It just seems that 
our systems lack the right termcap entries.


I managed to resize the putty window by running the command:
resize -s height width

so:
resize -s 24 80

Also adding this:
termcapinfo xterm WS=\E[8;%d;%dt

to:
/etc/screenrc

Allows screen to resize the putty session (with the :width and :height 
commands).


But when quitting/restarting screen it puts the putty and the screen session 
back to the original size.




Thanks, that gives me something to investigate. At the moment, adding
that causes the screen to resize to 80x24 when I attach or detach, which
is not what I want, but it gives me something to explore.

Tim.



Re: gnu screen and resizing terminal window

2022-01-27 Thread Tim Woodall

On Wed, 26 Jan 2022, Bijan Soleymani wrote:




On 2022-01-26 1:45 p.m., Tim Woodall wrote:

I have to use PuTTY to connect to a debian server. For reasons that are
outwith my control the ssh session disconnects every 24 hrs.

Therefore I run screen so after reconnecting I can recover to wherever
I was at.

However, the PuTTY window does not resize to whatever it was previously.
I can find lots of questions asking how to turn this feature off but
nothing on why it doesn't work for me.


As far as I know this is not a screen feature. Putty controls the window 
size, it is determined by the default or whatever is saved for that session. 
You can change what happens when you resize the putty window on the machine 
running putty. There is no way for you to change the putty screen from the 
debian side.


Screen does provide commands to resize the virtual terminal, however neither 
putty nor xterm seem to support the termcap commands (apparently it is Z0 and 
Z1).


What happens when you try to resize the screen using screen's windowing 
commands:


^a : width 50
^a : height 15

(control-a, then colon, then width 50, then enter)

In my case I get a message that
Your termcap does not specify how to change the terminal's width to 50
and
Your termcap does not specify how to change the terminal's height to 15

That is with TERM set to xterm.



Thanks, that seems to be it.

I also get: (from outside screen)

$ resize 80 50
resize: Can't set window size under VT100 emulation

which I'm sure I already tested and it worked but I must have been doing
something different!



Re: gnu screen and resizing terminal window

2022-01-26 Thread Bijan Soleymani

On 2022-01-26 5:55 p.m., Bijan Soleymani wrote:
Actually apparently putty does support remote resizing. It just seems 
that our systems lack the right termcap entries.


I managed to resize the putty window by running the command:
resize -s height width

so:
resize -s 24 80

Also adding this:
termcapinfo xterm WS=\E[8;%d;%dt

to:
/etc/screenrc

Allows screen to resize the putty session (with the :width and :height 
commands).


But when quitting/restarting screen it puts the putty and the screen 
session back to the original size.


Bijan



Re: gnu screen and resizing terminal window

2022-01-26 Thread Bijan Soleymani

On 2022-01-26 5:42 p.m., Bijan Soleymani wrote:
As far as I know this is not a screen feature. Putty controls the window 
size, it is determined by the default or whatever is saved for that 
session. You can change what happens when you resize the putty window on 
the machine running putty. There is no way for you to change the putty 
screen from the debian side.


Actually apparently putty does support remote resizing. It just seems 
that our systems lack the right termcap entries.


Bijan



Re: gnu screen and resizing terminal window

2022-01-26 Thread Bijan Soleymani




On 2022-01-26 1:45 p.m., Tim Woodall wrote:

I have to use PuTTY to connect to a debian server. For reasons that are
outwith my control the ssh session disconnects every 24 hrs.

Therefore I run screen so after reconnecting I can recover to wherever
I was at.

However, the PuTTY window does not resize to whatever it was previously.
I can find lots of questions asking how to turn this feature off but
nothing on why it doesn't work for me.


As far as I know this is not a screen feature. Putty controls the window 
size, it is determined by the default or whatever is saved for that 
session. You can change what happens when you resize the putty window on 
the machine running putty. There is no way for you to change the putty 
screen from the debian side.


Screen does provide commands to resize the virtual terminal, however 
neither putty nor xterm seem to support the termcap commands (apparently 
it is Z0 and Z1).


What happens when you try to resize the screen using screen's windowing 
commands:


^a : width 50
^a : height 15

(control-a, then colon, then width 50, then enter)

In my case I get a message that
Your termcap does not specify how to change the terminal's width to 50
and
Your termcap does not specify how to change the terminal's height to 15

That is with TERM set to xterm.

Bijan



Re: gnu screen and resizing terminal window

2022-01-26 Thread Andrei POPESCU
On Mi, 26 ian 22, 18:45:41, Tim Woodall wrote:
> I have to use PuTTY to connect to a debian server. For reasons that are
> outwith my control the ssh session disconnects every 24 hrs.
> 
> Therefore I run screen so after reconnecting I can recover to wherever
> I was at.
> 
> However, the PuTTY window does not resize to whatever it was previously.
> I can find lots of questions asking how to turn this feature off but
> nothing on why it doesn't work for me.
> 
> I'm not wedded to screen - about the only feature I'm using is the
> scrollback buffer - so a change to tmux is possible if that will
> help but I'd really like the resizing to work. Does this work for
> anyone?

As far as I recall the screen window size is limited to the smallest 
terminal that is connected to the particular session.

Maybe screen thinks there are other connections still present?

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




gnu screen and resizing terminal window

2022-01-26 Thread Tim Woodall

I have to use PuTTY to connect to a debian server. For reasons that are
outwith my control the ssh session disconnects every 24 hrs.

Therefore I run screen so after reconnecting I can recover to wherever
I was at.

However, the PuTTY window does not resize to whatever it was previously.
I can find lots of questions asking how to turn this feature off but
nothing on why it doesn't work for me.

I'm not wedded to screen - about the only feature I'm using is the
scrollback buffer - so a change to tmux is possible if that will
help but I'd really like the resizing to work. Does this work for
anyone?

(long term I'm hoping to get permission to install cygwin and then use an
X server and xterms and ssh from inside them or, even better, a Debian
laptop, but for now I'm stuck with PuTTY)

Tim.



Re: Resizing partitions on a headless server

2015-06-21 Thread Pascal Hambourg
csanyi...@gmail.com wrote:
 
 I want to create one LV for /usr and one LV for /var.
 But I can't create an LV with:
 # lvcreate --size 10.10G -n usr bubba
  Rounding up size to full physical extent 10.10 GiB
  /dev/bubba/usr: not found: device not cleared
  Aborting. Failed to wipe start of new LV.
  semid 1114120: semop failed for cookie 0xd4d6ff6: incorrect
   semaphore state
  Failed to set a proper state for notification semaphore
   identified by cookie value 223178742 (0xd4d6ff6) to initialize
   waiting for incoming notifications.
 
 I don't understand why I can't create a new LV with the command above.

Never had this error before. From a quick search, it may be related to
hotplug/udev event processing. In which environment are you running this
command? What happens if you add the option --noudevsync?

 And I don't understand why the following command is successful:
 # lvcreate -vvv --size 10.10G -n usr bubba

-v just increases the verbosity, so it should not have any effect on
success or failure.

 I searched the Internet and found another solution:
 # lvcreate -Zn --size 10.10G -n usr bubba
  Rounding up size to full physical extent 10.10 GiB
  WARNING: bubba/usr not zeroed
  Logical volume usr created
  semid 1146888: semop failed for cookie 0xd4d9b50: incorrect
   semaphore state
  Failed to set a proper state for notification semaphore
   identified by cookie value 223189840 (0xd4d9b50) to initialize
   waiting for incoming notifications.

That's not a solution, just a workaround to avoid the wiping error.

 Can I now use this newly created LV to make an ext4 filesystem on it,
 despite the fact that it is not zeroed?

Yes, if it is created correctly and active. Check with lvs, lvdisplay.
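
A minimal sketch of that last step, assuming the LV shows as active in lvs:

    lvs bubba                 # the usr LV should be listed with attr "-wi-a-"
    mkfs.ext4 /dev/bubba/usr  # mkfs overwrites the start of the LV anyway,
                              # so the skipped zeroing from -Zn does not matter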





Re: Resizing partitions on a headless server

2015-06-21 Thread Gustavo S. L.
A small contribution, perhaps unnecessary: when changing the size of an LV
with lvreduce or lvextend, it is important to also use the resize2fs and
e2fsck commands. Good luck in solving the problem.
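
As a hedged illustration of that ordering (a sketch, not commands from this
thread; the target sizes are illustrative), a safe ext4 shrink looks like:

    umount /dev/bubba/storage                 # the filesystem must be offline to shrink
    e2fsck -f /dev/bubba/storage              # resize2fs requires a clean check first
    resize2fs /dev/bubba/storage 20G          # 1. shrink the filesystem first
    lvreduce --size 20.1G /dev/bubba/storage  # 2. then the LV, never below the fs size
    resize2fs /dev/bubba/storage              # optionally grow the fs to fill the LV

Recent LVM can combine the last steps with lvreduce --resizefs.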

On Sun, Jun 21, 2015 at 11:38 AM, Pascal Hambourg pas...@plouf.fr.eu.org
wrote:

 csanyi...@gmail.com wrote:
 
  I want to create one LV for /usr and one LV for /var.
  But I can't create an LV with:
  # lvcreate --size 10.10G -n usr bubba
   Rounding up size to full physical extent 10.10 GiB
   /dev/bubba/usr: not found: device not cleared
   Aborting. Failed to wipe start of new LV.
   semid 1114120: semop failed for cookie 0xd4d6ff6: incorrect
semaphore state
   Failed to set a proper state for notification semaphore
identified by cookie value 223178742 (0xd4d6ff6) to initialize
waiting for incoming notifications.
 
  I don't understand why I can't create a new LV with the command above.

 Never had this error before. From a quick search, it may be related to
 hotplug/udev event processing. In which environment are you running this
 command? What happens if you add the option --noudevsync?

  And I don't understand why the following command is successful:
  # lvcreate -vvv --size 10.10G -n usr bubba

 -v just increases the verbosity, so it should not have any effect on
 success or failure.

  I searched the Internet and found another solution:
  # lvcreate -Zn --size 10.10G -n usr bubba
   Rounding up size to full physical extent 10.10 GiB
   WARNING: bubba/usr not zeroed
   Logical volume usr created
   semid 1146888: semop failed for cookie 0xd4d9b50: incorrect
semaphore state
   Failed to set a proper state for notification semaphore
identified by cookie value 223189840 (0xd4d9b50) to initialize
waiting for incoming notifications.

 That's not a solution, just a workaround to avoid the wiping error.

  Can I now use this newly created LV to make an ext4 filesystem on it,
  despite the fact that it is not zeroed?

 Yes, if it is created correctly and active. Check with lvs, lvdisplay.






-- 
Gustavo Soares de Lima


Re: Resizing partitions on a headless server

2015-06-21 Thread csanyipal
Pascal Hambourg pas...@plouf.fr.eu.org writes:

 csanyi...@gmail.com wrote:
 
 I want to create one LV for /usr and one LV for /var.
 But I can't create an LV with:
 # lvcreate --size 10.10G -n usr bubba
  Rounding up size to full physical extent 10.10 GiB
  /dev/bubba/usr: not found: device not cleared
  Aborting. Failed to wipe start of new LV.
  semid 1114120: semop failed for cookie 0xd4d6ff6: incorrect
   semaphore state
  Failed to set a proper state for notification semaphore
   identified by cookie value 223178742 (0xd4d6ff6) to initialize
   waiting for incoming notifications.
 
 I don't understand why I can't create a new LV with the command above.

 Never had this error before. From a quick search, it may be related to
 hotplug/udev event processing. In which environment are you running this
 command? What happens if you add the option --noudevsync?

# lvcreate --noudevsync --size 10.10G -n var bubba
 Rounding up size to full physical extent 10.10 GiB
 /dev/bubba/var: not found: device not cleared
 Aborting. Failed to wipe start of new LV.

This is a Debian GNU/Linux Jessie system on a PowerPC headless box.
Furthermore, I don't think udev is properly set up on this system.

I read the following in /etc/udev/udev.conf:
[quote]
# udevd is started in the initramfs, so when this file is modified the
# initramfs should be rebuilt.
[/quote]

In this file I removed the # from the beginning of the line:
udev_log=info

but I don't know how to rebuild the initramfs.
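
On a standard Debian system the rebuild itself is one command. A sketch,
assuming the stock initramfs-tools is installed and the box actually uses
an initramfs (none is visible in the /boot listing below):

    update-initramfs -u         # regenerate the initramfs for the current kernel
    update-initramfs -u -k all  # or for every installed kernel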

In /boot I have these files:
8313E21.dtb
System.map-3.2.62-1
bubba.dtb
config-3.2.62-1
uImage

This is a PowerPC box on which the boot process is started with U-Boot.

Can I describe this environment better?

 And I don't understand why the following command is successful:
 # lvcreate -vvv --size 10.10G -n usr bubba

 -v just increases the verbosity, so it should not have any effect on
 success or failure.

 I searched the Internet and found another solution:
 # lvcreate -Zn --size 10.10G -n usr bubba
  Rounding up size to full physical extent 10.10 GiB
  WARNING: bubba/usr not zeroed
  Logical volume usr created
  semid 1146888: semop failed for cookie 0xd4d9b50: incorrect
   semaphore state
  Failed to set a proper state for notification semaphore
   identified by cookie value 223189840 (0xd4d9b50) to initialize
   waiting for incoming notifications.

 That's not a solution, just a workaround to avoid the wiping error.

 Can I now use this newly created LV to make an ext4 filesystem on it,
 despite the fact that it is not zeroed?

 Yes, if it is created correctly and active. Check with lvs, lvdisplay.

# lvs
 LV  VGAttr   LSize  Pool Origin Data%  Meta%  Move Log \
  Cpy%Sync Convert
 storage bubba -wi-a- 20.10g
 usr bubba -wi-a- 10.10g

# lvdisplay
 --- Logical volume ---
 LV Path/dev/bubba/usr
 LV Nameusr
 VG Namebubba
 LV UUIDEe83A0-H6J3-w4Xi-bt1f-3zGN-jVm4-DOUKxq
 LV Write Accessread/write
 LV Creation host, time b2, 2015-06-19 07:26:48 +0200
 LV Status  available
 # open 0
 LV Size10.10 GiB
 Current LE 2586
 Segments   1
 Allocation inherit
 Read ahead sectors auto
 - currently set to 256
 Block device   253:1

-- 
Regards from Pal





Re: Resizing partitions on a headless server

2015-06-18 Thread csanyipal
Pascal Hambourg pas...@plouf.fr.eu.org writes:

 csanyi...@gmail.com wrote:
 
 Finally, I solved the problem by doing the following:
 # lvresize --size 455.5G /dev/mapper/bubba-storage
 # e2fsck -f /dev/mapper/bubba-storage

 Glad you were lucky.

 What is my goal?
 
 Filesystem Size  Used Avail Use% Mounted on
 /dev/root  9.2G  8.0G  815M  91% /
 devtmpfs   125M 0  125M   0% /dev
 tmpfs  125M  4.0K  125M   1% /dev/shm
 tmpfs  125M  5.6M  120M   5% /run
 tmpfs  5.0M 0  5.0M   0% /run/lock
 tmpfs  125M 0  125M   0% /sys/fs/cgroup
 /dev/mapper/bubba-storage  449G  8.2G  418G   2% /home
 tmpfs   25M 0   25M   0% /run/user/1001

 As one can see, my /dev/root partition is almost full.
 I want to increase the /dev/root partition to the maximum available size and
 decrease the /home partition to only 20 GiB.
 
 That way the /var directory can be large enough to hold the web and other
 content.
 
 What is your advice? What do I do to reach my goal?

 Do not resize partitions. This is difficult and risky. Use LVM.
 Reduce the filesystem in the LV and the LV to an adequate size (without
 a mistake this time).

I did this step successfully:
root@b2:~# pvdisplay
 --- Physical volume ---
 PV Name   /dev/sda2
 VG Name   bubba
 PV Size   455.43 GiB / not usable 3.65 MiB
 Allocatable   yes
 PE Size   4.00 MiB
 Total PE  116588
 Free PE   111442
 Allocated PE  5146
 PV UUID
 SMvR2K-6Z3c-xCgd-jSR2-kb1A-15a2-3RiS6V

root@b2:~# lvdisplay
 --- Logical volume ---
 LV Path/dev/bubba/storage
 LV Namestorage
 VG Namebubba
 LV UUID91yHxQ-RmOW-OeDv-jaIv-1z1B-KBSk-yCsDC6
 LV Write Accessread/write
 LV Creation host, time ,
 LV Status  available
 # open 1
 LV Size20.10 GiB
 Current LE 5146
 Segments   1
 Allocation inherit
 Read ahead sectors auto
 - currently set to 256
 Block device   253:0

 Create a new LV of adequate size. DON'T take all the available space in
 the VG. Leave some space for future needs. Increasing a LV and its
 filesystem is easy and can be done online while it's mounted. Reducing
 is risky, as you experienced.

I want to create one LV for /usr and one LV for /var.
But I can't create an LV with:
# lvcreate --size 10.10G -n usr bubba
 Rounding up size to full physical extent 10.10 GiB
 /dev/bubba/usr: not found: device not cleared
 Aborting. Failed to wipe start of new LV.
 semid 1114120: semop failed for cookie 0xd4d6ff6: incorrect
  semaphore state
 Failed to set a proper state for notification semaphore
  identified by cookie value 223178742 (0xd4d6ff6) to initialize
  waiting for incoming notifications.

I don't understand why I can't create a new LV with the command above.

And I don't understand why the following command is successful:
# lvcreate -vvv --size 10.10G -n usr bubba

I searched the Internet and found another solution:
# lvcreate -Zn --size 10.10G -n usr bubba
 Rounding up size to full physical extent 10.10 GiB
 WARNING: bubba/usr not zeroed
 Logical volume usr created
 semid 1146888: semop failed for cookie 0xd4d9b50: incorrect
  semaphore state
 Failed to set a proper state for notification semaphore
  identified by cookie value 223189840 (0xd4d9b50) to initialize
  waiting for incoming notifications.

Can I now use this newly created LV to make an ext4 filesystem on it
despite the fact that it is not zeroed?

-- 
Regards from Pal





Re: Resizing partitions on a headless server

2015-06-15 Thread Jonathan Dowland
On Mon, Jun 15, 2015 at 08:25:09AM +0200, csanyi...@gmail.com wrote:
 I bought the headless powerpc server here:
 http://www.excitostore.com/

If you mean the Excito B3, it would appear to be ARM, not PowerPC. 
That's good for you because ARM is still a supported architecture
in Debian, and PowerPC is not.

 I got the hardware preinstalled with Debian Sarge. The developers know
 why they partitioned it the way they did.

Really? Sarge is ancient. The website says it comes with Squeeze...





Re: Resizing partitions on a headless server

2015-06-15 Thread csanyipal
Jonathan Dowland j...@debian.org writes:

 On Mon, Jun 15, 2015 at 08:25:09AM +0200, csanyi...@gmail.com wrote:
 I bought the headless powerpc server here:
 http://www.excitostore.com/

 If you mean the Excito B3, it would appear to be ARM, not PowerPC. 
 That's good for you because ARM is still a supported architecture
 in Debian, and PowerPC is not.

Not a B3 but a B2, and the B2 is PowerPC.

 I got the hardware preinstalled with Debian Sarge. The developers know
 why they partitioned it the way they did.

 Really? Sarge is ancient. The website says it comes with Squeeze...

Well, I can't remember which Debian distribution came with the Bubba Two. It
maybe came with etch or lenny? Bubba 3 is a different story.

-- 
Regards from Pal





Re: Resizing partitions on a headless server

2015-06-15 Thread csanyipal
Jonathan Dowland j...@debian.org writes:

 On Mon, Jun 15, 2015 at 08:25:09AM +0200, csanyi...@gmail.com wrote:
 I bought the headless powerpc server here:
 http://www.excitostore.com/

 If you mean the Excito B3, it would appear to be ARM, not PowerPC. 
 That's good for you because ARM is still a supported architecture
 in Debian, and PowerPC is not.

Wrong, PowerPC is a supported architecture:
https://www.debian.org/releases/stable/

-- 
Regards from Pal





Re: Resizing partitions on a headless server

2015-06-15 Thread csanyipal
Pascal Hambourg pas...@plouf.fr.eu.org writes:

 csanyi...@gmail.com wrote:
 
 Finally, I solved the problem by doing the following:
 # lvresize --size 455.5G /dev/mapper/bubba-storage
 # e2fsck -f /dev/mapper/bubba-storage

 Glad you were lucky.

 Now I can use parted to resize my partitions.
 What is my goal?
 
 Filesystem Size  Used Avail Use% Mounted on
 /dev/root  9.2G  8.0G  815M  91% /
 devtmpfs   125M 0  125M   0% /dev
 tmpfs  125M  4.0K  125M   1% /dev/shm
 tmpfs  125M  5.6M  120M   5% /run
 tmpfs  5.0M 0  5.0M   0% /run/lock
 tmpfs  125M 0  125M   0% /sys/fs/cgroup
 /dev/mapper/bubba-storage  449G  8.2G  418G   2% /home
 tmpfs   25M 0   25M   0% /run/user/1001
 
 # fdisk -l
 
 Device Boot Start   End   Sectors   Size Id Type
 /dev/sda1  63  19551104  19551042   9.3G 83 Linux
 /dev/sda219551105 974647484 955096380 455.4G 8e Linux LVM
 /dev/sda3   974647485 976768064   2120580 1G 82 Linux swap / Solaris
 
 # lvs
   LV  VGAttr   LSize   Pool Origin Data%  Meta%  Move Log
   Cpy%Sync Convert
 storage bubba -wi-ao 455.40g
 
 # pvs
   PV VGFmt  Attr PSize   PFree
 /dev/sda2  bubba lvm2 a--  455.42g 20.00m

 I'm curious: what's the use of LVM if you have only one LV taking all
 the space in the VG, and plain partitions outside the VG?

I bought the headless PowerPC server here:
http://www.excitostore.com/

I got the hardware preinstalled with Debian Sarge. The developers know
why they partitioned it the way they did.

-- 
Regards from Pal





Re: Resizing partitions on a headless server

2015-06-14 Thread Gary Dale

On 14/06/15 12:40 AM, csanyi...@gmail.com wrote:

Gary Dale garyd...@torfree.net writes:


On 13/06/15 03:19 PM, csanyi...@gmail.com wrote:

Hello,

on my headless Debian GNU/Linux Jessie server I want to resize
partitions. So far I did followings:

root@b2:~# df -T
FilesystemType 1K-blocksUsed Available Use%
Mounted on
/dev/root ext3   9621848 8293064840008  91% /
devtmpfs  devtmpfs127800   0127800   0% /dev
tmpfs tmpfs   127880   4127876   1%
/dev/shm
tmpfs tmpfs   127880   17992109888  15% /run
tmpfs tmpfs 5120   0  5120   0%
/run/lock
tmpfs tmpfs   127880   0127880   0%
/sys/fs/cgroup
/dev/mapper/bubba-storage ext3 470050224 8512368 437660636   2%
/home
tmpfs tmpfs25576   0 25576   0%
/run/user/1001
tmpfs tmpfs25576   0 25576   0% /run/user/0

root@b2:~# umount /dev/mapper/bubba-storage

root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G
resize2fs 1.42.12 (29-Aug-2014)
Please run 'e2fsck -f /dev/mapper/bubba-storage' first.

root@b2:~# e2fsck -f /dev/mapper/bubba-storage
e2fsck 1.42.12 (29-Aug-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Bubba_home: 114439/59703296 files (0.4% non-contiguous), \
4001648/119386112 blocks

At this step I think I forgot to run again:
root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G

root@b2:~# lvresize --size 2.1G /dev/mapper/bubba-storage
Rounding size to boundary between physical extents: 2.10 GiB
  WARNING: Reducing active logical volume to 2.10 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce storage? [y/n]: y
  Size of logical volume bubba/storage changed from 455.42 GiB
(116588 extents) to 2.10 GiB (538 extents).
  Logical volume storage successfully resized

Furthermore, I was wrong when I determined the --size to 2.1G in the
command above, because I wanted to write 20.1G instead.

root@b2:~# resize2fs -p /dev/mapper/bubba-storage
resize2fs 1.42.12 (29-Aug-2014)
resize2fs: New size smaller than minimum (2153070)

root@b2:~# mount /dev/mapper/bubba-storage

After these steps I rebooted the server, but I can't log in to it with ssh,
only with a serial cable.

Now, when I log in on the serial console as a non-root user, I get these
messages:

b2 login: csanyipal
Password:
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
Last login: Sat Jun 13 14:06:27 CEST 2015 from 192.168.10.90 on pts/0
Linux b2 3.2.62-1 #1 Mon Aug 25 04:22:40 UTC 2014 ppc

The programs included with the Debian GNU/Linux system are free
software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
No mail.
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
No directory, logging in with HOEXT3-fs error (device dm-0):
ext3_get_inode_loc: unable to read inode block - inode=30752769,
block=61505538
ME=/

Now what can I do to correct the partitions?


Boot from something like system rescue CD and try to fix the
damage. With any luck resize2fs didn't do anything. Hopefully you can
put the partitions back the way they were.

My headless PowerPC box can't boot from CD because it has no CD 
drive. It only has a USB port. Furthermore, it can't boot from a usual 
system rescue image installed on a USB stick, because it uses a uImage. I 
tried systemrescuecd ( http://www.sysresccd.org ) and GParted Live to boot 
with, but without success.

I think I can only use the serial console. There I can 
run parted, but I don't know how to fix the problem I made with it.


Otherwise, there is always testdisk or your backups.

I just installed testdisk and tried the following:
Select a media:

Disk /dev/sda - 500 GB / 465 GiB - WDC WD5000AACS-00G8B1

  Disk /dev/mapper/bubba-storage - 2256 MB / 2152 MiB - WDC \
   WD5000AACS-00G8B1
  Disk /dev/dm-0 - 2256 MB / 2152 MiB - WDC WD5000AACS-00G8B1


[Proceed ]

Please select the partition table type, press Enter when done.

[Humax  ] Humax partition table

Hint: Humax partition table type has been detected.

Disk /dev/sda - 500 GB / 465 GiB - WDC WD5000AACS-00G8B1
  CHS 60801 255 63 - sector size=512


[ Analyse  ]

Disk /dev/sda - 500 GB / 465 GiB - CHS 60801 255 63
Current partition structure:

Re: Resizing partitions on a headless server

2015-06-14 Thread csanyipal
csanyi...@gmail.com writes:

 Gary Dale garyd...@torfree.net writes:

 On 13/06/15 03:19 PM, csanyi...@gmail.com wrote:

[snipped]
 My headless PowerPC box can't boot from CD because it has no CD
 drive. It only has a USB port. Furthermore, it can't boot from a usual
 system rescue image installed on a USB stick, because it uses a uImage. I
 tried systemrescuecd ( http://www.sysresccd.org ) and GParted Live to boot
 with, but without success.

 I think I can only use the serial console. There I can
 run parted, but I don't know how to fix the problem I made with it.

 Otherwise, there is always testdisk or your backups.

 I just installed testdisk and tried the following:

[snipped]
 Segmentation fault

 So, I can't use testdisk here.

Finally, I solved the problem by doing the following:
# lvresize --size 455.5G /dev/mapper/bubba-storage
# e2fsck -f /dev/mapper/bubba-storage
# resize2fs -p /dev/mapper/bubba-storage
# reboot

So now I got back my /home partition and can ssh into my server.

Now I can use parted to resize my partitions.
What is my goal?

Filesystem Size  Used Avail Use% Mounted on
/dev/root  9.2G  8.0G  815M  91% /
devtmpfs   125M 0  125M   0% /dev
tmpfs  125M  4.0K  125M   1% /dev/shm
tmpfs  125M  5.6M  120M   5% /run
tmpfs  5.0M 0  5.0M   0% /run/lock
tmpfs  125M 0  125M   0% /sys/fs/cgroup
/dev/mapper/bubba-storage  449G  8.2G  418G   2% /home
tmpfs   25M 0   25M   0% /run/user/1001

# fdisk -l

Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x

Device Boot Start   End   Sectors   Size Id Type
/dev/sda1  63  19551104  19551042   9.3G 83 Linux
/dev/sda219551105 974647484 955096380 455.4G 8e Linux LVM
/dev/sda3   974647485 976768064   2120580 1G 82 Linux swap / Solaris

# lvs
  LV  VGAttr   LSize   Pool Origin Data%  Meta%  Move Log
  Cpy%Sync Convert
storage bubba -wi-ao 455.40g

# pvs
  PV VGFmt  Attr PSize   PFree
/dev/sda2  bubba lvm2 a--  455.42g 20.00m

As one can see, my /dev/root partition is almost full.
I want to increase the /dev/root partition to the maximum available size and
decrease the /home partition to only 20 GiB.

That way the /var directory can be large enough to hold the web and other
content.

What is your advice? What do I do to reach my goal?

-- 
Regards from Pal





Re: Resizing partitions on a headless server

2015-06-14 Thread Gary Dale

On 14/06/15 08:26 AM, csanyi...@gmail.com wrote:

csanyi...@gmail.com writes:


Gary Dale garyd...@torfree.net writes:

On 13/06/15 03:19 PM, csanyi...@gmail.com wrote:

[snipped]

My headless PowerPC box can't boot from CD because it has no CD
drive. It only has a USB port. Furthermore, it can't boot from a usual
system rescue image installed on a USB stick, because it uses a uImage. I
tried systemrescuecd ( http://www.sysresccd.org ) and GParted Live to boot
with, but without success.

I think I can only use the serial console. There I can
run parted, but I don't know how to fix the problem I made with it.


Otherwise, there is always testdisk or your backups.

I just installed testdisk and tried the following:

[snipped]

Segmentation fault

So, I can't use testdisk here.

Finally, I solved the problem by doing the following:
# lvresize --size 455.5G /dev/mapper/bubba-storage
# e2fsck -f /dev/mapper/bubba-storage
# resize2fs -p /dev/mapper/bubba-storage
# reboot

So now I got back my /home partition and can ssh into my server.

Now I can use parted to resize my partitions.
What is my goal?

Filesystem Size  Used Avail Use% Mounted on
/dev/root  9.2G  8.0G  815M  91% /
devtmpfs   125M 0  125M   0% /dev
tmpfs  125M  4.0K  125M   1% /dev/shm
tmpfs  125M  5.6M  120M   5% /run
tmpfs  5.0M 0  5.0M   0% /run/lock
tmpfs  125M 0  125M   0% /sys/fs/cgroup
/dev/mapper/bubba-storage  449G  8.2G  418G   2% /home
tmpfs   25M 0   25M   0% /run/user/1001

# fdisk -l

Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x

Device Boot Start   End   Sectors   Size Id Type
/dev/sda1  63  19551104  19551042   9.3G 83 Linux
/dev/sda219551105 974647484 955096380 455.4G 8e Linux LVM
/dev/sda3   974647485 976768064   2120580 1G 82 Linux swap / Solaris

# lvs
   LV  VGAttr   LSize   Pool Origin Data%  Meta%  Move Log
   Cpy%Sync Convert
 storage bubba -wi-ao 455.40g

# pvs
   PV VGFmt  Attr PSize   PFree
 /dev/sda2  bubba lvm2 a--  455.42g 20.00m

As one can see, my /dev/root partition is almost full.
I want to increase the /dev/root partition to the maximum available size and
decrease the /home partition to only 20 GiB.

That way the /var directory can be large enough to hold the web and other
content.

What is your advice? What do I do to reach my goal?

My advice is to leave well enough alone until such time as you are fully 
comfortable using the tools. Then proceed by modifying one partition at a 
time, verifying that it has worked before trying to do anything else.


You've just wasted a lot of time trying to do too much at once. This is 
your data that you are playing with. Some extra umount/adjust/mount 
cycles are a small price to pay for minimizing the risk to your files.






Re: Resizing partitions on a headless server

2015-06-14 Thread Pascal Hambourg
csanyi...@gmail.com wrote:
 Hello,
 
 on my headless Debian GNU/Linux Jessie server I want to resize
 partitions.

Why? The use of LVM should avoid the need to resize partitions (PVs).

 root@b2:~# e2fsck -f /dev/mapper/bubba-storage
 e2fsck 1.42.12 (29-Aug-2014)
 Pass 1: Checking inodes, blocks, and sizes
 Pass 2: Checking directory structure
 Pass 3: Checking directory connectivity
 Pass 4: Checking reference counts
 Pass 5: Checking group summary information
 Bubba_home: 114439/59703296 files (0.4% non-contiguous), \
 4001648/119386112 blocks
 
 At this step I think I forgot to run again:
 root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G
 
 root@b2:~# lvresize --size 2.1G /dev/mapper/bubba-storage
   Rounding size to boundary between physical extents: 2.10 GiB
 WARNING: Reducing active logical volume to 2.10 GiB
   THIS MAY DESTROY YOUR DATA (filesystem etc.)
   Do you really want to reduce storage? [y/n]: y
 Size of logical volume bubba/storage changed from 455.42 GiB
   (116588 extents) to 2.10 GiB (538 extents).
 Logical volume storage successfully resized
 
 Furthermore, I was wrong when I determined the --size to 2.1G in the
 command above, because I wanted to write 20.1G instead.

The bad news is that you probably screwed the filesystem. LVM provides
flexibility over plain partitions, but at the cost of complexity and is
less tolerant to such a mistake.

With a plain partition, all you would have to do to fix the mistake is
to extend the reduced partition (not the filesystem) to its original
size. However, with LVM, if you extend a reduced LV to its original
size, nothing guarantees that it will use the same physical blocks as
before. You can try, but it may not restore the filesystem's integrity.
Run fsck to check the damage.

Edit: check in /etc/lvm/backup for a metadata backup of the previous
situation of the VG bubba. Using it to restore the LV is beyond my
knowledge, but if your data is important and you don't have a backup
(sounds like an oxymoron), my advice is don't touch anything until you
find how to restore the LV. Otherwise, just extend the LV and recreate
the filesystem on it.
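
For the record, the LVM tool for that kind of restore is vgcfgrestore. A
hedged sketch, assuming an archive entry from before the bad lvresize
exists (the archive file name below is hypothetical):

    vgcfgrestore -l bubba                 # list archived metadata versions for VG bubba
    vgcfgrestore -f /etc/lvm/archive/bubba_00010-1234567890.vg bubba
                                          # hypothetical file name; the LV must not be in use
    e2fsck -f /dev/mapper/bubba-storage   # then check the filesystem again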

 Now what can I do to correct the partitions?

There is no partition to correct. The problem is in the LV bubba/storage
and its filesystem.





Re: Resizing partitions on a headless server

2015-06-14 Thread Pascal Hambourg
csanyi...@gmail.com wrote:
 
 Finally, I solved the problem by doing the following:
 # lvresize --size 455.5G /dev/mapper/bubba-storage
 # e2fsck -f /dev/mapper/bubba-storage

Glad you were lucky.

 Now I can use parted to resize my partitions.
 What is my goal?
 
 Filesystem Size  Used Avail Use% Mounted on
 /dev/root  9.2G  8.0G  815M  91% /
 devtmpfs   125M 0  125M   0% /dev
 tmpfs  125M  4.0K  125M   1% /dev/shm
 tmpfs  125M  5.6M  120M   5% /run
 tmpfs  5.0M 0  5.0M   0% /run/lock
 tmpfs  125M 0  125M   0% /sys/fs/cgroup
 /dev/mapper/bubba-storage  449G  8.2G  418G   2% /home
 tmpfs   25M 0   25M   0% /run/user/1001
 
 # fdisk -l
 
 Device Boot Start   End   Sectors   Size Id Type
 /dev/sda1  63  19551104  19551042   9.3G 83 Linux
 /dev/sda219551105 974647484 955096380 455.4G 8e Linux LVM
 /dev/sda3   974647485 976768064   2120580 1G 82 Linux swap / Solaris
 
 # lvs
   LV  VGAttr   LSize   Pool Origin Data%  Meta%  Move Log
   Cpy%Sync Convert
 storage bubba -wi-ao 455.40g
 
 # pvs
   PV VGFmt  Attr PSize   PFree
 /dev/sda2  bubba lvm2 a--  455.42g 20.00m

I'm curious: what's the use of LVM if you have only one LV taking all
the space in the VG, and plain partitions outside the VG?

 As one can see, my /dev/root partition is almost full.
 I want to increase the /dev/root partition to the maximum available size and
 decrease the /home partition to only 20 GiB.
 
 That way the /var directory can be large enough to hold the web and other
 content.
 
 What is your advice? What do I do to reach my goal?

Do not resize partitions. This is difficult and risky. Use LVM.
Reduce the filesystem in the LV and the LV to an adequate size (without
a mistake this time).
Create a new LV of adequate size. DON'T take all the available space in
the VG. Leave some space for future needs. Increasing an LV and its
filesystem is easy and can be done online while it's mounted. Reducing
is risky, as you experienced.
Move the data in /var from the root filesystem to the new LV and mount
it on /var. Update /etc/fstab accordingly. (A sketch of both approaches
follows below.)

Or:

Create a var directory in /home
Move the data in /var to /home/var
Bind-mount /home/var on /var and update /etc/fstab.
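
A hedged sketch of the two alternatives; the LV name newvar and the 10G
size are illustrative assumptions, not values from the thread:

    # Option 1: a dedicated LV for /var
    lvcreate --size 10G -n newvar bubba
    mkfs.ext4 /dev/bubba/newvar
    mount /dev/bubba/newvar /mnt
    cp -a /var/. /mnt/        # best done in rescue/single-user mode so /var is quiet
    umount /mnt
    # then add to /etc/fstab:
    # /dev/mapper/bubba-newvar  /var  ext4  defaults  0  2

    # Option 2: bind-mount a directory from /home
    mkdir /home/var
    cp -a /var/. /home/var/
    # then add to /etc/fstab:
    # /home/var  /var  none  bind  0  0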





Re: Resizing partitions on a headless server

2015-06-14 Thread Gary Dale

On 14/06/15 09:12 AM, Pascal Hambourg wrote:

csanyi...@gmail.com wrote:

Hello,

on my headless Debian GNU/Linux Jessie server I want to resize
partitions.

Why? The use of LVM should avoid the need to resize partitions (PVs).


root@b2:~# e2fsck -f /dev/mapper/bubba-storage
e2fsck 1.42.12 (29-Aug-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Bubba_home: 114439/59703296 files (0.4% non-contiguous), \
4001648/119386112 blocks

At this step I think I forgot to run again:
root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G

root@b2:~# lvresize --size 2.1G /dev/mapper/bubba-storage
   Rounding size to boundary between physical extents: 2.10 GiB
 WARNING: Reducing active logical volume to 2.10 GiB
   THIS MAY DESTROY YOUR DATA (filesystem etc.)
   Do you really want to reduce storage? [y/n]: y
 Size of logical volume bubba/storage changed from 455.42 GiB
   (116588 extents) to 2.10 GiB (538 extents).
 Logical volume storage successfully resized

Furthermore, I was wrong when I determined the --size to 2.1G in the
command above, because I wanted to write 20.1G instead.

The bad news is that you probably screwed the filesystem. LVM provides
flexibility over plain partitions, but at the cost of complexity and is
less tolerant to such a mistake.

With a plain partition, all you would have to do to fix the mistake is
to extend the reduced partition (not the filesystem) to its original
size. However, with LVM, if you extend a reduced LV to its original
size, nothing guarantees that it will use the same physical blocks as
before. You can try, but it may not restore the filesystem's integrity.
Run fsck to check the damage.

Edit: check in /etc/lvm/backup for a metadata backup of the previous
situation of the VG bubba. Using it to restore the LV is beyond my
knowledge, but if your data is important and you don't have a backup
(sounds like an oxymoron), my advice is don't touch anything until you
find how to restore the LV. Otherwise, just extend the LV and recreate
the filesystem on it.


Now what can I do to correct the partitions?

There is no partition to correct. The problem is in the LV bubba/storage
and its filesystem.


If you read the original post, it looks like the resize2fs failed. 
Therefore the only problem is that the partition table is wrong.






Re: Resizing partitions on a headless server

2015-06-14 Thread csanyipal
Pascal Hambourg pas...@plouf.fr.eu.org writes:

 csanyi...@gmail.com wrote:
 
 Finally, I solved the problem by doing the following:
 # lvresize --size 455.5G /dev/mapper/bubba-storage
 # e2fsck -f /dev/mapper/bubba-storage

 Glad you were lucky.

 Now I can use parted to resize my partitions.
 What is my goal?
 
 Filesystem Size  Used Avail Use% Mounted on
 /dev/root  9.2G  8.0G  815M  91% /
 devtmpfs   125M 0  125M   0% /dev
 tmpfs  125M  4.0K  125M   1% /dev/shm
 tmpfs  125M  5.6M  120M   5% /run
 tmpfs  5.0M 0  5.0M   0% /run/lock
 tmpfs  125M 0  125M   0% /sys/fs/cgroup
 /dev/mapper/bubba-storage  449G  8.2G  418G   2% /home
 tmpfs   25M 0   25M   0% /run/user/1001
 
 # fdisk -l
 
 Device Boot Start   End   Sectors   Size Id Type
 /dev/sda1  63  19551104  19551042   9.3G 83 Linux
 /dev/sda219551105 974647484 955096380 455.4G 8e Linux LVM
 /dev/sda3   974647485 976768064   2120580 1G 82 Linux swap / Solaris
 
 # lvs
   LV  VGAttr   LSize   Pool Origin Data%  Meta%  Move Log
   Cpy%Sync Convert
 storage bubba -wi-ao 455.40g
 
 # pvs
   PV VGFmt  Attr PSize   PFree
 /dev/sda2  bubba lvm2 a--  455.42g 20.00m

 I'm curious: what's the use of LVM if you have only one LV taking all
 the space in the VG, and plain partitions outside the VG?

I bought this headless PowerPC server here:
http://www.excitostore.com/

I got the hardware preinstalled with Debian Sarge. The developers know
why they partitioned it the way they did.

-- 
Regards from Pal





Re: Resizing partitions on a headless server

2015-06-14 Thread Pascal Hambourg
csanyi...@gmail.com wrote:
 Gary Dale garyd...@torfree.net writes:
 
 On 14/06/15 09:12 AM, Pascal Hambourg wrote:
 There is no partition to correct. The problem is in the LV bubba/storage
 and its filesystem.

 If you read the original post, it looks like the resize2fs
 failed. Therefore the only problem is the partition table is wrong.

No command mentioned by the OP ever modified the partition table.
Modifying LVM logical volumes and filesystems does not modify the
partition table. They have their own metadata.

 I think everything is fixed now, the partition table too. Am I right?

Hopefully yes. You were lucky this time.
The partition table was never modified.

 How can I be sure? After reboot I can log in as a non-root user, and I can
 find my (not so valuable) data on /home, ...

Well, if fsck -f did not complain and your files are back, you can be
pretty confident.





Re: Resizing partitions on a headless server

2015-06-14 Thread csanyipal
Gary Dale garyd...@torfree.net writes:

 On 14/06/15 09:12 AM, Pascal Hambourg wrote:
 csanyi...@gmail.com wrote:
 Hello,

 on my headless Debian GNU/Linux Jessie server I want to resize
 partitions.
 Why? The use of LVM should avoid the need to resize partitions (PVs).

 root@b2:~# e2fsck -f /dev/mapper/bubba-storage
 e2fsck 1.42.12 (29-Aug-2014)
 Pass 1: Checking inodes, blocks, and sizes
 Pass 2: Checking directory structure
 Pass 3: Checking directory connectivity
 Pass 4: Checking reference counts
 Pass 5: Checking group summary information
 Bubba_home: 114439/59703296 files (0.4% non-contiguous), \
 4001648/119386112 blocks

 At this step I think I forgot to run again:
 root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G

 root@b2:~# lvresize --size 2.1G /dev/mapper/bubba-storage
Rounding size to boundary between physical extents: 2.10 GiB
  WARNING: Reducing active logical volume to 2.10 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce storage? [y/n]: y
  Size of logical volume bubba/storage changed from 455.42 GiB
(116588 extents) to 2.10 GiB (538 extents).
  Logical volume storage successfully resized

 Furthermore, I was wrong when I determined the --size to 2.1G in the
 command above, because I wanted to write 20.1G instead.
 The bad news is that you probably screwed the filesystem. LVM provides
 flexibility over plain partitions, but at the cost of complexity and is
 less tolerant to such a mistake.

 With a plain partition, all you would have to do to fix the mistake is
 to extend the reduced partition (not the filesystem) to its original
 size. However, with LVM, if you extend a reduced LV to its original
 size, nothing guarantees that it will use the same physical blocks as
 before. You can try, but it may not restore the filesystem's integrity.
 Run fsck to check the damage.

 Edit: check in /etc/lvm/backup for a metadata backup of the previous
 situation of the VG bubba. Using it to restore the LV is beyond my
 knowledge, but if your data is important and you don't have a backup
 (sounds like an oxymoron), my advice is don't touch anything until you
 find how to restore the LV. Otherwise, just extend the LV and recreate
 the filesystem on it.

 Now what can I do to correct the partitions?
 There is no partition to correct. The problem is in the LV bubba/storage
 and its filesystem.


 If you read the original post, it looks like the resize2fs
 failed. Therefore the only problem is the partition table is wrong.

I think everything is fixed now, the partition table too. Am I right?
How can I be sure? After reboot I can log in as a non-root user, and I can
find my (not so valuable) data on /home, ...

-- 
Regards from Pal





Re: Resizing partitions on a headless server

2015-06-13 Thread Gary Dale

On 13/06/15 03:19 PM, csanyi...@gmail.com wrote:

Hello,

on my headless Debian GNU/Linux Jessie server I want to resize
partitions. So far I did the following:

root@b2:~# df -T
FilesystemType 1K-blocksUsed Available Use%
Mounted on
/dev/root ext3   9621848 8293064840008  91% /
devtmpfs  devtmpfs127800   0127800   0% /dev
tmpfs tmpfs   127880   4127876   1%
/dev/shm
tmpfs tmpfs   127880   17992109888  15% /run
tmpfs tmpfs 5120   0  5120   0%
/run/lock
tmpfs tmpfs   127880   0127880   0%
/sys/fs/cgroup
/dev/mapper/bubba-storage ext3 470050224 8512368 437660636   2%
/home
tmpfs tmpfs25576   0 25576   0%
/run/user/1001
tmpfs tmpfs25576   0 25576   0% /run/user/0

root@b2:~# umount /dev/mapper/bubba-storage

root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G
resize2fs 1.42.12 (29-Aug-2014)
Please run 'e2fsck -f /dev/mapper/bubba-storage' first.

root@b2:~# e2fsck -f /dev/mapper/bubba-storage
e2fsck 1.42.12 (29-Aug-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Bubba_home: 114439/59703296 files (0.4% non-contiguous), \
4001648/119386112 blocks

At this step I think I forgot to run again:
root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G

root@b2:~# lvresize --size 2.1G /dev/mapper/bubba-storage
   Rounding size to boundary between physical extents: 2.10 GiB
 WARNING: Reducing active logical volume to 2.10 GiB
   THIS MAY DESTROY YOUR DATA (filesystem etc.)
   Do you really want to reduce storage? [y/n]: y
 Size of logical volume bubba/storage changed from 455.42 GiB
   (116588 extents) to 2.10 GiB (538 extents).
 Logical volume storage successfully resized

Furthermore, I was wrong when I determined the --size to 2.1G in the
command above, because I wanted to write 20.1G instead.

root@b2:~# resize2fs -p /dev/mapper/bubba-storage
resize2fs 1.42.12 (29-Aug-2014)
resize2fs: New size smaller than minimum (2153070)

root@b2:~# mount /dev/mapper/bubba-storage

After these steps I rebooted the server, but I can't log in to it with ssh,
only with a serial cable.

Now, when I log in on the serial console as a non-root user, I get these
messages:

b2 login: csanyipal
Password:
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
Last login: Sat Jun 13 14:06:27 CEST 2015 from 192.168.10.90 on pts/0
Linux b2 3.2.62-1 #1 Mon Aug 25 04:22:40 UTC 2014 ppc

The programs included with the Debian GNU/Linux system are free
software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
No mail.
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
No directory, logging in with HOEXT3-fs error (device dm-0):
ext3_get_inode_loc: unable to read inode block - inode=30752769,
block=61505538
ME=/

Now what can I do to correct the partitions?

Boot from something like system rescue CD and try to fix the damage. 
With any luck resize2fs didn't do anything. Hopefully you can put the 
partitions back the way they were.


Otherwise, there is always testdisk or your backups.





Re: Resizing partitions on a headless server

2015-06-13 Thread csanyipal
Gary Dale garyd...@torfree.net writes:

 On 13/06/15 03:19 PM, csanyi...@gmail.com wrote:
 Hello,

 on my headless Debian GNU/Linux Jessie server I want to resize
 partitions. So far I did the following:

 root@b2:~# df -T
 FilesystemType 1K-blocksUsed Available Use%
 Mounted on
 /dev/root ext3   9621848 8293064840008  91% /
 devtmpfs  devtmpfs127800   0127800   0% /dev
 tmpfs tmpfs   127880   4127876   1%
 /dev/shm
 tmpfs tmpfs   127880   17992109888  15% /run
 tmpfs tmpfs 5120   0  5120   0%
 /run/lock
 tmpfs tmpfs   127880   0127880   0%
 /sys/fs/cgroup
 /dev/mapper/bubba-storage ext3 470050224 8512368 437660636   2%
 /home
 tmpfs tmpfs25576   0 25576   0%
 /run/user/1001
 tmpfs tmpfs25576   0 25576   0% 
 /run/user/0

 root@b2:~# umount /dev/mapper/bubba-storage

 root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G
 resize2fs 1.42.12 (29-Aug-2014)
 Please run 'e2fsck -f /dev/mapper/bubba-storage' first.

 root@b2:~# e2fsck -f /dev/mapper/bubba-storage
 e2fsck 1.42.12 (29-Aug-2014)
 Pass 1: Checking inodes, blocks, and sizes
 Pass 2: Checking directory structure
 Pass 3: Checking directory connectivity
 Pass 4: Checking reference counts
 Pass 5: Checking group summary information
 Bubba_home: 114439/59703296 files (0.4% non-contiguous), \
 4001648/119386112 blocks

 At this step I think I forgot to run again:
 root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G

 root@b2:~# lvresize --size 2.1G /dev/mapper/bubba-storage
Rounding size to boundary between physical extents: 2.10 GiB
  WARNING: Reducing active logical volume to 2.10 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce storage? [y/n]: y
  Size of logical volume bubba/storage changed from 455.42 GiB
(116588 extents) to 2.10 GiB (538 extents).
  Logical volume storage successfully resized

 Furthermore, I was wrong when I determined the --size to 2.1G in the
 command above, because I wanted to write 20.1G instead.

 root@b2:~# resize2fs -p /dev/mapper/bubba-storage
 resize2fs 1.42.12 (29-Aug-2014)
 resize2fs: New size smaller than minimum (2153070)

 root@b2:~# mount /dev/mapper/bubba-storage

 After these steps I rebooted the server, but I can't log in to it with ssh,
 only with a serial cable.

 Now, when I log in on the serial console as a non-root user, I get these
 messages:

 b2 login: csanyipal
 Password:
 EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
 block - inode=30752769, block=61505538
 Last login: Sat Jun 13 14:06:27 CEST 2015 from 192.168.10.90 on pts/0
 Linux b2 3.2.62-1 #1 Mon Aug 25 04:22:40 UTC 2014 ppc

 The programs included with the Debian GNU/Linux system are free
 software;
 the exact distribution terms for each program are described in the
 individual files in /usr/share/doc/*/copyright.

 Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
 permitted by applicable law.
 No mail.
 EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
 block - inode=30752769, block=61505538
 EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
 block - inode=30752769, block=61505538
 EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
 block - inode=30752769, block=61505538
 No directory, logging in with HOEXT3-fs error (device dm-0):
 ext3_get_inode_loc: unable to read inode block - inode=30752769,
 block=61505538
 ME=/

 Now what can I do to correct the partitions?

 Boot from something like system rescue CD and try to fix the
 damage. With any luck resize2fs didn't do anything. Hopefully you can
 put the partitions back the way they were.

My headless PowerPC box can't boot from CD because it has no CD
drive. It only has a USB port. Furthermore, it can't boot from a usual
system rescue image installed on a USB stick, because it uses a uImage. I
tried systemrescuecd ( http://www.sysresccd.org ) and GParted Live to boot
with, but without success.

I think I can only use the serial console. There I can
run parted, but I don't know how to fix the problem I made with it.

 Otherwise, there is always testdisk or your backups.

I just installed testdisk and tried the following:
Select a media:
Disk /dev/sda - 500 GB / 465 GiB - WDC WD5000AACS-00G8B1
 Disk /dev/mapper/bubba-storage - 2256 MB / 2152 MiB - WDC \
  WD5000AACS-00G8B1
 Disk /dev/dm-0 - 2256 MB / 2152 MiB - WDC WD5000AACS-00G8B1

[Proceed ]
Please select the partition table type, press Enter when done.
[Humax  ] Humax partition table
Hint: Humax partition table type has been detected.

Disk /dev/sda - 500 GB / 465 GiB - WDC WD5000AACS-00G8B1
 CHS 60801 255 63 - sector size=512

[ Analyse  ]

Disk /dev/sda - 500 GB / 465 GiB - CHS 60801 255 63
Current partition structure:

Resizing partitions on a headless server

2015-06-13 Thread csanyipal
Hello,

on my headless Debian GNU/Linux Jessie server I want to resize
partitions. So far I did the following:

root@b2:~# df -T
FilesystemType 1K-blocksUsed Available Use%
Mounted on
/dev/root ext3   9621848 8293064840008  91% /
devtmpfs  devtmpfs127800   0127800   0% /dev
tmpfs tmpfs   127880   4127876   1%
/dev/shm
tmpfs tmpfs   127880   17992109888  15% /run
tmpfs tmpfs 5120   0  5120   0%
/run/lock
tmpfs tmpfs   127880   0127880   0%
/sys/fs/cgroup
/dev/mapper/bubba-storage ext3 470050224 8512368 437660636   2%
/home
tmpfs tmpfs25576   0 25576   0%
/run/user/1001
tmpfs tmpfs25576   0 25576   0% /run/user/0

root@b2:~# umount /dev/mapper/bubba-storage

root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G
resize2fs 1.42.12 (29-Aug-2014)
Please run 'e2fsck -f /dev/mapper/bubba-storage' first.

root@b2:~# e2fsck -f /dev/mapper/bubba-storage
e2fsck 1.42.12 (29-Aug-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Bubba_home: 114439/59703296 files (0.4% non-contiguous), \
4001648/119386112 blocks

At this step I think I forgot to run again:
root@b2:~# resize2fs -p /dev/mapper/bubba-storage 20G

root@b2:~# lvresize --size 2.1G /dev/mapper/bubba-storage
  Rounding size to boundary between physical extents: 2.10 GiB
WARNING: Reducing active logical volume to 2.10 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
  Do you really want to reduce storage? [y/n]: y
Size of logical volume bubba/storage changed from 455.42 GiB
  (116588 extents) to 2.10 GiB (538 extents).
Logical volume storage successfully resized

Furthermore, I was wrong when I determined the --size to 2.1G in the
command above, because I wanted to write 20.1G instead.

root@b2:~# resize2fs -p /dev/mapper/bubba-storage
resize2fs 1.42.12 (29-Aug-2014)
resize2fs: New size smaller than minimum (2153070)

root@b2:~# mount /dev/mapper/bubba-storage

After these steps I rebooted the server, but I can't log in to it with ssh,
only with a serial cable.

Now, when I log in on the serial console as a non-root user, I get these
messages:

b2 login: csanyipal
Password:
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
Last login: Sat Jun 13 14:06:27 CEST 2015 from 192.168.10.90 on pts/0
Linux b2 3.2.62-1 #1 Mon Aug 25 04:22:40 UTC 2014 ppc

The programs included with the Debian GNU/Linux system are free
software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
No mail.
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
EXT3-fs error (device dm-0): ext3_get_inode_loc: unable to read inode
block - inode=30752769, block=61505538
No directory, logging in with HOEXT3-fs error (device dm-0):
ext3_get_inode_loc: unable to read inode block - inode=30752769,
block=61505538
ME=/

Now what can I do to correct the partitions?

-- 
Regards from Pal






Re: Resizing LVM issue

2014-07-02 Thread Pascal Hambourg
Miroslav Skoric wrote:
 On 06/22/2014 03:29 PM, Pascal Hambourg wrote:
 
 You should not have allocated all the space in the VG but instead should
 have left some free space for further growing or creating LVs when the
 need arises.
 
 Let's try once again: I have not allocated anything at the time of 
 installation.

Yes you have, by accepting the installer's suggestion:

 The only thing I've done was to accept the Debian installer's
 suggestion to make the OS installation using the whole hard disk and
 set up LVM. (In other words, I let the installer calculate the
 particular partitions.)

 I now see some of you saying it should not be
 done that way, but wouldn't it be better to blame the programmers who
 made such a 'bad option' within the installer?

You as the user have the final choice. You have to decide if the
installer's suggestion fits your needs and constraints. The installer
doesn't know about them.

 Secondly, the installer and/or some online manuals had suggested
 that the main purpose of LVM was to allow reallocating space among
 the OS's partitions later, if and when needed, from within an
 already working system.

Yes, but LVs usually contain filesystems, and as you have seen, online
shrinking of a mounted filesystem is often difficult or impossible. So
it is better to avoid this kind of situation.

If you can grow the PV (e.g. by adding a new disk) when you need to grow
an LV, then it is fine to allocate all the space of the initial PV.
However, if you cannot grow the PV, then it is better to leave some of
the PV space initially unallocated and to grow LVs from that free space
when needed.
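
Growing from that reserve is the easy, online direction. A sketch; the VG
and LV names vg0 and tmp and the size are placeholders:

    vgs vg0                        # check the VFree column for free space in the VG
    lvextend -L +10G /dev/vg0/tmp  # grow the LV by 10 GiB
    resize2fs /dev/vg0/tmp         # grow the mounted ext4 filesystem online
    # or do both steps at once:
    lvextend -r -L +10G /dev/vg0/tmp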





Re: Resizing LVM issue

2014-06-29 Thread Miroslav Skoric

On 06/22/2014 03:29 PM, Pascal Hambourg wrote:


Miroslav Skoric a écrit :


1. What would you do if you need more space in /tmp and you know you
have some spare space in /home or else, but do not want to reinstall?


If you are in such a situation, then you missed one of the goals of LVM.
You should not have allocated all the space in the VG but instead should
have left some free space for further growing or creating LVs when the
need arises.



Let's try once again: I did not allocate anything at the time of 
installation. The only thing I did was to accept the Debian installer's 
suggestion to install the OS using the whole hard disk and set up LVM. 
(In other words, I let the installer calculate the particular 
partitions.) I see now some of you saying it should not be done that 
way, but would it not be better to blame the programmers who made such 
a 'bad option' in the installer?


Secondly, both the installer and some online manuals suggested that the 
main purpose of LVM was to allow reallocating space among the OS's 
partitions later, if and when needed, from within an already running 
system. If that's not the case, I'd suggest that the programmer 
community invent a better solution.


Anyway, as I said earlier, I managed to restore the space to the 
original status, then I reallocated things in the proper order. So, in 
a couple of days it will be a month since I fixed my wrong attempt to 
reallocate the space, and the machine has not complained since :)







Installation disc creator (Was: Resizing LVM issue)

2014-06-22 Thread Miroslav Skoric

On 06/15/2014 10:52 PM, Reco wrote:



 No, it seems to belong to the main archive.

$ apt-cache search apt on cd | grep ^apt
apt - commandline package manager
aptdaemon - transaction based package management service
aptoncd - Installation disc creator for packages downloaded via APT



Yep, aptoncd was the one that asked for more space in /tmp. Not anymore 
after resizing. I use that app mostly for updating the other machine 
that does not have broadband access (dial-up is there but too slow for 
updating). Btw, what app is good for making an image of the system, a 
sort of full backup, and is it possible to use such an image to clone 
more than one computer later, i.e. to avoid installations from scratch? 
(I have two Debian machines here and another one with Ubuntu, and I 
might move that Ubuntu machine to Debian, but I don't like reinstalling 
everything all over again...)


M.





Re: Resizing LVM issue

2014-06-22 Thread Pascal Hambourg
Miroslav Skoric wrote:
 
 1. What would you do if you need more space in /tmp and you know you 
 have some spare space in /home or elsewhere, but do not want to reinstall?

If you are in such a situation, then you missed one of the goals of LVM.
You should not have allocated all the space in the VG but instead should
have left some free space for further growing or creating LVs when the
need arises.





Re: Resizing LVM issue

2014-06-22 Thread Pascal Hambourg
Bob Proulx wrote:
 
 There are many stories of this from people doing the same thing on the
 net.  It seems that the code for expanding the file system is used
 often and optimized to run fast but that the code for shrinking it is
 not used very often and therefore has severe inefficiencies.  But if
 you wait long enough, more than a week in my case, then it will finish
 successfully.

Regardless of any optimization, shrinking a filesystem is much more
difficult than expanding it. It requires moving all the used blocks
that are allocated beyond the new size. Moving blocks within the same
disk is a rather slow operation.





Re: Installation disc creator (Was: Resizing LVM issue)

2014-06-22 Thread Linux-Fan
On 06/22/2014 01:41 PM, Miroslav Skoric wrote:
 On 06/15/2014 10:52 PM, Reco wrote:
 Btw, what app is good for making an image of the system, a sort
 of full backup, and is it possible to use such an image to clone more
 than one computer later, i.e. to avoid installations from scratch? (I have
 two Debian machines here and another one with Ubuntu, and I might move that
 Ubuntu machine to Debian, but I don't like reinstalling everything all over again...)
 
 M.

You could try ``Remastersys'' (http://remastersys.com/); although it is
currently not maintained AFAICT, it still works here. As I also use it
very often, should it finally go offline, I am going to keep at least
a modified Debian version running.

A cleaner approach (which I have tried multiple times without success
already) could be a customized installation disc.

HTH
Linux-Fan





Re: Resizing LVM issue

2014-06-16 Thread Rick Thomas

On Jun 15, 2014, at 12:34 PM, Bob Proulx b...@proulx.com wrote:

 In my case I had read the documentation.  I had resized smaller
 partitions successfully.  I had no idea it would take more than a week
 of 24x7 runtime before completing.  If I had I would have done it
 differently.  Which is why I am noting it here as the topic came up.
 To forewarn others.  If I had only known then what I know now I would
 have copied it off and then back after resizing.  Experience is
 sometimes the scars left behind after having done things poorly the
 first time.

The optimum strategy is probably to make a full backup, then start the resize.  
If the resize looks like it's going to take too long, you can always stop it, 
write off the mess left behind by the incomplete resize, re-partition, and 
restore from the backup.  If the resize finishes in a reasonable amount of 
time, you go ahead and re-partition and don't have to do any restores.

Worst case scenario -- you've lost the time spent in the incomplete resize.  
Best case scenario, you've lost the time spent in doing the backup, but you've 
gained the time you would have spent in doing the restore.

Since you can't predict hardware/power failures with 100% certainty, doing the 
backup before you start is a good idea anyway, whether or not you actually need 
it in the end.
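
A rough sketch of that workflow (untested; /mnt/backup stands for any
location with enough room, e.g. an external disk):

 tar -cpf /mnt/backup/home.tar -C /home .    # full backup before touching anything
 # ... start the shrink; if it looks like it will never finish, stop it,
 # re-create the partition/LV at the desired size, then:
 mkfs.ext3 /dev/mapper/localhost-home        # write off the half-resized filesystem
 mount /home
 tar -xpf /mnt/backup/home.tar -C /home      # restore from the backup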

Rick




Re: Resizing LVM issue

2014-06-15 Thread Chris Bannister
On Sun, Jun 15, 2014 at 12:00:12AM +0200, Miroslav Skoric wrote:
 (Btw, the app apt-on-CD recently started to ask for more space in /tmp.
 After resizing, that app seems to be happy :-)

tal% apt-cache search apt-on-CD
tal%

Third party?

-- 
If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the 
oppressing. --- Malcolm X





Re: Resizing LVM issue

2014-06-15 Thread Bob Proulx
Miroslav Skoric wrote:
 1. What would you do if you need more space in /tmp and you know you have
 some spare space in /home or elsewhere, but do not want to reinstall?

No need to re-install.  Brute force works.  I would use a second disk
large enough to hold everything.  Copy off the old, repartition, then
copy back to the smaller sized partition.

 2. Wouldn't it be nice if resizing routines/commands/programs/... showed
 the calculated time they would need for such an operation, so a user could
 decide whether to continue or cancel?

Yes.  Of course.  But first it must be possible to estimate this.  Sometimes
the only way to know is to actually do the work and it isn't known
until then.  And someone must actually do the work of coding it up.

In the case of the ext2 resize2fs the problem is mainly due to some
inefficiency in the implemented algorithm.  The expansion is well used
and quite fast.  But the shrink code is only rarely used.  The shrink
code is not optimized and hasn't had much attention.  If someone were
to get into that code base and review it I am confident they would
find some type of nested loop causing it to operate in nasty
exponential time that could be entirely avoided but is currently
implemented in a most brute force and inefficient way.

For example anyone who has ever implemented an AVL tree will know that
supporting adding elements is quite easy.  But supporting deleting
elements is quite a bit more work.  Many things are asymmetrical that
way.  It is the 80%-20% rule.  80% of the work takes 20% of the time.
The remaining 20% of the work takes 80% of the time.

In my case I had read the documentation.  I had resized smaller
partitions successfully.  I had no idea it would take more than a week
of 24x7 runtime before completing.  If I had I would have done it
differently.  Which is why I am noting it here as the topic came up.
To forewarn others.  If I had only known then what I know now I would
have copied it off and then back after resizing.  Experience is
sometimes the scars left behind after having done things poorly the
first time.

Bob




Re: Resizing LVM issue

2014-06-15 Thread Reco
On Mon, 16 Jun 2014 04:03:19 +1200
Chris Bannister cbannis...@slingshot.co.nz wrote:

 On Sun, Jun 15, 2014 at 12:00:12AM +0200, Miroslav Skoric wrote:
  (Btw, the app apt-on-CD recently started to ask for more space in /tmp.
  After resizing, that app seems to be happy :-)
 
 tal% apt-cache search apt-on-CD
 tal%
 
 Third party?

No, it seems to belong to the main archive.

$ apt-cache search apt on cd | grep ^apt
apt - commandline package manager
aptdaemon - transaction based package management service
aptoncd - Installation disc creator for packages downloaded via APT

Reco





Re: Resizing LVM issue

2014-06-14 Thread Miroslav Skoric

On 06/05/2014 11:29 AM, Jochen Spieker wrote:


Miroslav Skoric:

On 06/01/2014 11:36 PM, Jochen Spieker wrote:



If you don't have a
backup you can try to resize the LV again to its original size and hope
for the best.


Thanks for the suggestions. Yep, I managed to return to the
original size at first. Then I resized it properly (incl. leaving a
few gigs as unused space). e2fsck did spend a while rechecking each
partition but it seems that everything is in order now. We'll see in
days to come ...


Nice! It is still possible that some of your data was overwritten but if
the fsck didn't find anything troubling you are probably safe now.

Next todo: implement a useful backup strategy. :)

J.



Just to let you know that, some ten days after the 2nd resizing, 
everything is still in order (no complaints from fsck or anything 
else). The lesson learned: the proper order of commands must be 
carefully followed. Done that way, resizing the LVM is a good option, 
at least until the installation process improves itself in a way that 
automatically formats new partitions to be better balanced. (I mean, if 
I remember properly, during the initial system installation some years 
ago ... it was 6.0.1a, which I have upgraded to 7.5 since ... it 
offered to set up the LVM partitions automatically. So I accepted its 
recommendation.) But recently I realized that I needed more space in 
/tmp and found that I had more than enough free space in /home ... that 
was the reason for resizing.


(Btw, the app apt-on-CD recently started to ask for more space in /tmp. 
After resizing, that app seems to be happy :-)


M.





Re: Resizing LVM issue

2014-06-14 Thread Miroslav Skoric

On 06/05/2014 11:04 AM, Bob Proulx wrote:


Richard Hector wrote:

I prefer not to get in the situation where I have to shrink a filesystem
though - xfs doesn't support it anyway.


Agreed.  Even better is to avoid it.  Small ext{3,4} file systems
shrink acceptably well.  But larger ext{3,4} file systems can take a
very long time to shrink.  I once made the mistake of trying to shrink
a 600G filesystem.  The operation was eventually successful.  But it
took 10 days to complete!  And once I started it there was no other
option than to let it complete.



Two questions:

1. What would you do if you need more space in /tmp and you know you 
have some spare space in /home or elsewhere, but do not want to reinstall?


2. Wouldn't it be nice if resizing routines/commands/programs/... showed 
the calculated time they would need for such an operation, so a user could 
decide whether to continue or cancel?


M.





Re: Resizing LVM issue

2014-06-14 Thread Miroslav Skoric

On 06/05/2014 08:42 AM, Richard Hector wrote:




If I have to shrink a filesystem, I tend to shrink it to something
smaller than my eventual goal, then shrink the LV to the goal, then
resize2fs again without specifying the size, so it grows to fit.

I prefer not to get in the situation where I have to shrink a filesystem
though - xfs doesn't support it anyway.

Richard




Thanks for the suggestions. Well, I would not have shrunk the filesystem 
(actually a part of it) if I did not need more space on /tmp (as a part 
of the LVM). Anyway, after this experience, may I suggest that LVM 
programmers think about some software routines that would enable users 
to recompose (resize, shrink, whatever ...) their LVM from within a 
mounted system, in such a way that after the next reboot the LVM and FS 
automatically recompose themselves - so as to avoid common mistakes.


M.





Re: Resizing LVM issue

2014-06-14 Thread Bzzzz
On Sat, 14 Jun 2014 23:40:26 +0200
Miroslav Skoric sko...@eunet.rs wrote:

 1. What would you do if you need more space in /tmp and you know
 you have some spare space in /home or elsewhere, but do not want to
 reinstall?

http://www.yourhowto.net/increase-tmp-partition-size-linux/

However, adding some GB of RAM would be better
(to extend tmpfs).
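
If /tmp is a tmpfs, its size is a single line in /etc/fstab, something like
(untested; 2G is an arbitrary cap):

 tmpfs   /tmp   tmpfs   size=2G,mode=1777   0   0

and a mount -o remount /tmp should pick up the new size without a reboot.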

 2. Wouldn't it be nice if resizing routines/commands/programs/...
 showed the calculated time they would need for such an operation, so a
 user could decide whether to continue or cancel?

This would only be empirical (or would take a huge algorithm and
a lot of CPU/RAM to compute correctly).

-- 
ju the Lord does not touch many people nowadays
fab priests take care of this for him




Re: Resizing LVM issue

2014-06-14 Thread Stan Hoeppner
On 6/14/2014 4:33 PM, Miroslav Skoric wrote:
 ...may I suggest that LVM programmers
 think about some software routines that would enable users to recompose
 (resize, shrink, whatever ...) their LVM from within a mounted system,
 in such a way that after the next reboot the LVM and FS automatically
 recompose themselves - so as to avoid common mistakes.

This is not possible.  A filesystem must be shrunk before the underlying
storage device.  If you shrink the LV first then portions of the
filesystem will now map to non-existent sectors.  If files exist in
those sectors they will be lost.  Same goes for filesystem metadata.

It is possible to add sectors to a device under a mounted filesystem
because the filesystem has no knowledge of them, and is not mapping
them.  The same is not true of removing sectors under a mounted
filesystem, for the reason above.

Cheers,

Stan





Re: Resizing LVM issue

2014-06-14 Thread Jochen Spieker
Miroslav Skoric:
 
 Two questions:
 
 1. What would you do if you need more space in /tmp and you know you
 have some spare space in /home or elsewhere, but do not want to
 reinstall?

If it's only temporary, I would probably do a bind mount. Just create
~/tmp-tmp, then as root: cp -a /tmp ~/tmp-tmp/ and
mount -o bind ~/tmp-tmp/tmp /tmp. (Sorry for the many tmps. :))
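
Spelled out (untested; ~/tmp-tmp is just an arbitrary staging directory):

 mkdir ~/tmp-tmp                     # on the filesystem with spare room
 cp -a /tmp ~/tmp-tmp/               # keep current contents and permissions
 mount -o bind ~/tmp-tmp/tmp /tmp    # /tmp now lives on /home's filesystem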

But I never had that issue in several years running sid on a laptop with
4GB RAM and /tmp as tmpfs.

 2. Wouldn't it be nice if resizing routines/commands/programs/... showed
 the calculated time they would need for such an operation, so a user
 could decide whether to continue or cancel?

They would do that if there was a way to know in advance. I don't think
it's possible to do more than a wild guess, which I assume could still be
off by a factor of two or more.

J.
-- 
When standing at the top of beachy head I find the rocks below very
attractive.
[Agree]   [Disagree]
 http://www.slowlydownward.com/NODATA/data_enter2.html




Re: Resizing LVM issue

2014-06-05 Thread Richard Hector
On 05/06/14 10:17, Miroslav Skoric wrote:
 On 06/01/2014 11:03 PM, emmanuel segura wrote:
 

 i think the correct steps are:

 resize2fs /dev/mapper/localhost-home -2G
 lvresize --size -2G /dev/mapper/localhost-home


 
 Thank you. I tried with the first command but it did not work (it
 returned an error).

Pasting the error into your email is almost always beneficial :-)

However, I don't think resize2fs can do relative sizes like that - you
need to calculate the new size yourself, and specify it, without the
minus sign.

If I have to shrink a filesystem, I tend to shrink it to something
smaller than my eventual goal, then shrink the LV to the goal, then
resize2fs again without specifying the size, so it grows to fit.
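
Concretely, something like this (untested; 195G/200G are example figures,
and the filesystem must be unmounted and fsck'd first):

 resize2fs /dev/mapper/localhost-home 195G         # shrink the fs below the goal
 lvreduce --size 200G /dev/mapper/localhost-home   # shrink the LV to the goal
 resize2fs /dev/mapper/localhost-home              # no size given: grow the fs to fit the LV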

I prefer not to get in the situation where I have to shrink a filesystem
though - xfs doesn't support it anyway.

Richard





Re: Resizing LVM issue

2014-06-05 Thread Bob Proulx
Richard Hector wrote:
 I prefer not to get in the situation where I have to shrink a filesystem
 though - xfs doesn't support it anyway.

Agreed.  Even better is to avoid it.  Small ext{3,4} file systems
shrink acceptably well.  But larger ext{3,4} file systems can take a
very long time to shrink.  I once made the mistake of trying to shrink
a 600G filesystem.  The operation was eventually successful.  But it
took 10 days to complete!  And once I started it there was no other
option than to let it complete.

There are many stories of this from people doing the same thing on the
net.  It seems that the code for expanding the file system is used
often and optimized to run fast but that the code for shrinking it is
not used very often and therefore has severe inefficiencies.  But if
you wait long enough, more than a week in my case, then it will finish
successfully.

If I ever need to shrink an ext{3,4} file system again I will not
shrink it.  I will instead create a new file system of the desired
size and then copy from the old to the new.  That is reliable and will
complete in much less time than shrinking an existing file system.
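
Roughly, given enough free space in the VG for a second LV (untested; the
name home2 is made up):

 lvcreate --size 200G --name home2 localhost   # new LV of the desired size
 mkfs.ext3 /dev/mapper/localhost-home2         # fresh filesystem on it
 mount /dev/mapper/localhost-home2 /mnt
 rsync -aHAX /home/ /mnt/                      # copy everything across
 umount /mnt
 # then point /etc/fstab at the new LV and lvremove the old one when satisfied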

Bob




Re: Resizing LVM issue

2014-06-05 Thread Jochen Spieker
Miroslav Skoric:
 On 06/01/2014 11:36 PM, Jochen Spieker wrote:
 
 
 If you don't have a
 backup you can try to resize the LV again to its original size and hope
 for the best.
 
 Thanks for the suggestions. Yep, I managed to return to the
 original size at first. Then I resized it properly (incl. leaving a
 few gigs as unused space). e2fsck did spend a while rechecking each
 partition but it seems that everything is in order now. We'll see in
 days to come ...

Nice! It is still possible that some of your data was overwritten but if
the fsck didn't find anything troubling you are probably safe now.

Next todo: implement a useful backup strategy. :)

J.
-- 
I frequently find myself at the top of the stairs with absolutely
nothing happening in my brain.
[Agree]   [Disagree]
 http://www.slowlydownward.com/NODATA/data_enter2.html




Re: Resizing LVM issue

2014-06-04 Thread Miroslav Skoric

On 06/01/2014 11:03 PM, emmanuel segura wrote:



i think the correct steps are:

resize2fs /dev/mapper/localhost-home -2G
lvresize --size -2G /dev/mapper/localhost-home




Thank you. I tried with the first command but it did not work (it 
returned an error).


However, later I managed to resize the filesystem back to its original 
state, and from there to do the resizing properly.


M.





Re: Resizing LVM issue

2014-06-04 Thread Miroslav Skoric

On 06/01/2014 11:36 PM, Jochen Spieker wrote:



If you don't have a
backup you can try to resize the LV again to its original size and hope
for the best.

BTW, I found it to be good practice to initially use less than 100% of
available space on my PVs for the LVs. That way I can grow filesystems
that are too small easily when I need that space.



Thanks for the suggestions. Yep, I managed to return to the original 
size at first. Then I resized it properly (incl. leaving a few gigs as 
unused space). e2fsck did spend a while rechecking each partition but 
it seems that everything is in order now. We'll see in days to come ...


M.





Resizing LVM issue

2014-06-01 Thread Miroslav Skoric

Hi,

I have encrypted LVM on one of my Wheezy machines, and recently noticed 
that /tmp space was too low for one application (in fact it was about 
350 MB and I wanted it to be around 2.5 GB). So I tried to make /tmp 
bigger while the system was mounted and online, but vgdisplay reported 
no free space for that action (something like this):


sys@localhost:~$ sudo vgdisplay
  --- Volume group ---
  VG Name   localhost
  System ID
  Formatlvm2
  Metadata Areas1
  Metadata Sequence No  9
  VG Access read/write
  VG Status resizable
  MAX LV0
  Cur LV6
  Open LV   6
  Max PV0
  Cur PV1
  Act PV1
  VG Size   297.85 GiB
  PE Size   4.00 MiB
  Total PE  76249
  Alloc PE / Size   76249 / 297.85 GiB
  Free  PE / Size   0 / 0
  VG UUID   fbCaw1-u3SN-2HCy-

Then I decided to shrink /home by some 2 GB and add that to /tmp:

sys@localhost:~$ sudo lvresize --size -2G /dev/mapper/localhost-home
sys@localhost:~$ sudo lvresize --size +2G /dev/mapper/localhost-tmp

According to df, it did so:

sys@localhost:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs329233   219069 93166  71% /
udev   102400 10240   0% /dev
tmpfs 304560  756303804   1% /run
/dev/mapper/localhost-root329233   219069 93166  71% /
tmpfs   51200  5120   0% /run/lock
tmpfs 609100   80609020   1% /run/shm
/dev/sda1 23319131650189100  15% /boot
/dev/mapper/localhost-home 289312040 11966292 262649508   5% /home
/dev/mapper/localhost-tmp240831511231   2273129   1% /tmp
/dev/mapper/localhost-usr8647944  5745732   2462916  70% /usr
/dev/mapper/localhost-var2882592   916600   1819560  34% /var
sys@localhost:~$

It seems that /dev/mapper/localhost-tmp was about 2.4 GB, so I wanted to 
resize the newly changed filesystems:


sys@localhost:~$ sudo resize2fs /dev/mapper/localhost-home
resize2fs 1.42.5 (29-Jul-2012)
Filesystem at /dev/mapper/localhost-home is mounted on /home; on-line 
resizing required

resize2fs: On-line shrinking not supported

Similar output was with e2fsck:

sys@localhost:~$ sudo e2fsck -p /dev/mapper/localhost-home
/dev/mapper/localhost-home is mounted.
e2fsck: Cannot continue, aborting.


sys@localhost:~$

Obviously I should not have done that while the filesystem was mounted 
(or I did not know the proper syntax); however, there was no complaint 
from resize2fs /dev/mapper/localhost-tmp


But after the next reboot, it stopped when it tried to perform Checking 
file systems:


/dev/mapper/localhost-home: The filesystem size (according to the 
superblock) is 73481216 blocks

The physical size of the device is 72956928 blocks
Either the superblock or the partition table is likely to be corrupt!

/dev/mapper/localhost-home: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
 (i.e., without -a or -p options)

Anyway, the other segments of the filesystem were clean, so by pressing 
CONTROL-D it was possible to terminate the shell and resume the system boot.


My question is how to solve that inconsistency issue now. At first I 
tried dumpe2fs to search for backup superblocks, then e2fsck -b 
one_of_those_backup_superblocks_from_the_list, but without resolution; 
I mean, the inconsistency is not fixed. Probably I do not use e2fsck 
properly even when /home is not mounted. So the machine still stops 
during boot at the filesystem check, and I have to continue booting by 
pressing CONTROL-D.


Any suggestion?





Re: Resizing LVM issue

2014-06-01 Thread emmanuel segura
from man resize2fs

   If you wish to shrink an ext2 partition, first use resize2fs to
   shrink the size of the filesystem.  Then you may use fdisk(8) to
   shrink the size of the partition.  When shrinking the size of the
   partition, make sure you do not make it smaller than the new size
   of the ext2 filesystem!

i think the correct steps are:

resize2fs /dev/mapper/localhost-home -2G
lvresize --size -2G /dev/mapper/localhost-home




2014-06-01 22:00 GMT+02:00 Miroslav Skoric sko...@eunet.rs:

 Hi,

 I have encrypted LVM on one of my Wheezy machines, and recently noticed
 that /tmp space was too low for one application (in fact it was about 350
 MB and I wanted it to be around 2.5 GB). So I tried to make /tmp bigger
 while the system was mounted and online, but vgdisplay reported no free space
 for that action (something like this):

 sys@localhost:~$ sudo vgdisplay
   --- Volume group ---
   VG Name   localhost
   System ID
   Formatlvm2
   Metadata Areas1
   Metadata Sequence No  9
   VG Access read/write
   VG Status resizable
   MAX LV0
   Cur LV6
   Open LV   6
   Max PV0
   Cur PV1
   Act PV1
   VG Size   297.85 GiB
   PE Size   4.00 MiB
   Total PE  76249
   Alloc PE / Size   76249 / 297.85 GiB
   Free  PE / Size   0 / 0
   VG UUID   fbCaw1-u3SN-2HCy-

 Then I decided to shrink /home by some 2 GB and add that to /tmp:

 sys@localhost:~$ sudo lvresize --size -2G /dev/mapper/localhost-home
 sys@localhost:~$ sudo lvresize --size +2G /dev/mapper/localhost-tmp

 According to df, it did so:

 sys@localhost:~$ df
 Filesystem 1K-blocks Used Available Use% Mounted on
 rootfs329233   219069 93166  71% /
 udev   102400 10240   0% /dev
 tmpfs 304560  756303804   1% /run
 /dev/mapper/localhost-root329233   219069 93166  71% /
 tmpfs   51200  5120   0% /run/lock
 tmpfs 609100   80609020   1% /run/shm
 /dev/sda1 23319131650189100  15% /boot
 /dev/mapper/localhost-home 289312040 11966292 262649508   5% /home
 /dev/mapper/localhost-tmp240831511231   2273129   1% /tmp
 /dev/mapper/localhost-usr8647944  5745732   2462916  70% /usr
 /dev/mapper/localhost-var2882592   916600   1819560  34% /var
 sys@localhost:~$

 It seems that /dev/mapper/localhost-tmp was about 2.4 GB, so I wanted to
 resize the newly changed filesystems:

 sys@localhost:~$ sudo resize2fs /dev/mapper/localhost-home
 resize2fs 1.42.5 (29-Jul-2012)
 Filesystem at /dev/mapper/localhost-home is mounted on /home; on-line
 resizing required
 resize2fs: On-line shrinking not supported

 Similar output was with e2fsck:

 sys@localhost:~$ sudo e2fsck -p /dev/mapper/localhost-home
 /dev/mapper/localhost-home is mounted.
 e2fsck: Cannot continue, aborting.


 sys@localhost:~$

 Obviously I should not have done that while the filesystem was mounted (or I
 did not know the proper syntax); however, there was no complaint from resize2fs
 /dev/mapper/localhost-tmp

 But after the next reboot, it stopped when it tried to perform Checking file
 systems:

 /dev/mapper/localhost-home: The filesystem size (according to the
 superblock) is 73481216 blocks
 The physical size of the device is 72956928 blocks
 Either the superblock or the partition table is likely to be corrupt!

 /dev/mapper/localhost-home: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
  (i.e., without -a or -p options)

 Anyway, the other segments of the filesystem were clean, so by pressing CONTROL-D
 it was possible to terminate the shell and resume the system boot.

 My question is how to solve that inconsistency issue now. At first I tried
 dumpe2fs to search for backup superblocks, then e2fsck -b
 one_of_those_backup_superblocks_from_the_list, but without resolution;
 I mean, the inconsistency is not fixed. Probably I do not use e2fsck
 properly even when /home is not mounted. So the machine still stops during
 boot at the filesystem check, and I have to continue booting
 by pressing CONTROL-D.

 Any suggestion?






-- 
this is my life and I live it as long as God wills


Re: Resizing LVM issue

2014-06-01 Thread Jochen Spieker
Miroslav Skoric:
 
 sys@localhost:~$ sudo lvresize --size -2G /dev/mapper/localhost-home
…
 sys@localhost:~$ sudo resize2fs /dev/mapper/localhost-home

You did it the wrong way round. You have to shrink from top to bottom:
first the filesystem, then the LV (and then possibly the physical volume
followed by the partition).
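
In this case that would have been something like (untested; NEW_SIZE is a
placeholder for the target size, which the LV must never end up below):

 umount /home                                    # ext3 cannot be shrunk online
 e2fsck -f /dev/mapper/localhost-home            # resize2fs requires a clean filesystem
 resize2fs /dev/mapper/localhost-home NEW_SIZE   # first the filesystem ...
 lvresize --size NEW_SIZE /dev/mapper/localhost-home   # ... then the LV, never below the fs
 mount /home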

It is hard to know whether your system already overwrote data previously
in /home after you cut a part off of it and gave that to /tmp/. If that
had happened to me, I would restore from backup. If you don't have a
backup you can try to resize the LV again to its original size and hope
for the best.

BTW, I found it to be good practice to initially use less than 100% of
available space on my PVs for the LVs. That way I can grow filesystems
that are too small easily when I need that space.

J.
-- 
I am very intolerant with other drivers.
[Agree]   [Disagree]
 http://www.slowlydownward.com/NODATA/data_enter2.html




openbox ignores 'update windows while resizing' check box

2011-01-19 Thread briand
the rc file looks like it has the right thing:

  <resize>
    <drawContents>no</drawContents>
    <popupShow>Nonpixel</popupShow>
    <!-- 'Always', 'Never', or 'Nonpixel' (xterms and such) -->
    <popupPosition>Center</popupPosition>
    <!-- 'Center', 'Top', or 'Fixed' -->
    <popupFixedPosition>
      <!-- these are used if popupPosition is set to 'Fixed' -->
      <x>10</x>
      <!-- positive number for distance from left edge, negative number
           for distance from right edge, or 'Center' -->
      <y>10</y>
      <!-- positive number for distance from top edge, negative number
           for distance from bottom edge, or 'Center' -->
    </popupFixedPosition>
  </resize>

This should be purely a window manager function, right?

Nothing that would enable this in X?

Brian





Re: Resizing Raid 1 partitions

2010-05-21 Thread Alex Samad
On Wed, 2010-05-19 at 13:32 -0600, Aaron Toponce wrote: 
 On 05/19/2010 12:47 PM, Erwan David wrote:
  Hi,
[snip]

 I would personally recommend backing up your data, and reinstalling,
 with LVM on top of your software RAID. You still have the redundancy,
 and you have the awesome flexibility of resizing volumes with great ease.

ditto

 







Re: Resizing Raid 1 partitions

2010-05-20 Thread Erwan David
Aaron Toponce wrote:
 On 05/19/2010 12:47 PM, Erwan David wrote:
  Hi,

 I have a setup with 2 disks and following raid setting:

 sda1+sdb1 - md0, /
 sda2+sdb2 - md1 swap
 sda3+sdb3 - md2 /home

 I'd like to resize partitions to get more space on md2 and less on md0.  

 What would be a good way to achieve this?
 
 The best way to achieve this would be to use LVM above your software
 RAID. At that point, it would be very painless, compared to what is
 ahead of you now. If you're curious, here would be the steps:
 
 mdadm -C /dev/md0 -n 2 -l 1 -a yes /dev/sd{a,b}
 pvcreate /dev/md0
 vgcreate home /dev/md0
 lvcreate -L 1G -n swap home
 lvcreate -L 10G -n root home
 lvcreate -l 100%FREE -n home home
 
 Then, for giving more space to home, and less to root, boot off a live
 CD, and (assuming you're using ext3/4):
 
 e2fsck /dev/home/root
 resize2fs /dev/home/root 6G
 lvreduce -L 6G /dev/home/root
 lvextend -L +4G /dev/home/home
 resize2fs /dev/home/home
 
 That's it! However, because you chose not to use LVM, you will need to
 boot off a live CD that supports Linux software RAID, rebuild the array,
 and perform the resizing there. I'm not sure if GParted supports this or
 not. Worth checking out, however.
 
 I would personally recommend backing up your data, and reinstalling,
 with LVM on top of your software RAID. You still have the redundancy,
 and you have the awesome flexibility of resizing volumes with great ease.
 

Thanks. But I would have preferred to avoid a full reinstall...

I found this tutorial
http://www.howtoforge.com/how-to-resize-raid-partitions-shrink-and-grow-software-raid

I will try that first.
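
The gist of that tutorial, one disk at a time, is roughly (untested; check
the tutorial itself before running anything):

 mdadm /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3   # drop one half of the mirror
 # repartition sdb with fdisk/parted, keeping the RAID partition type
 mdadm /dev/md2 --add /dev/sdb3                       # re-add and wait for the resync
 cat /proc/mdstat                                     # watch the resync finish
 # repeat for sda, then grow the array and the filesystem:
 mdadm --grow /dev/md2 --size=max
 resize2fs /dev/md2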

-- 
Erwan





Resizing Raid 1 partitions

2010-05-19 Thread Erwan David
Hi,

I have a setup with 2 disks and following raid setting:

sda1+sdb1 - md0, /
sda2+sdb2 - md1 swap
sda3+sdb3 - md2 /home

I'd like to resize partitions to get more space on md2 and less on md0. 

What would be a good way to achieve this?

I know that libparted does not handle RAID partitions, so I was thinking to 
remove the sdb partitions from the RAID first, and
resize them.
Then, I wonder if I may resize the sda partitions. Will they be recognized as RAID 
partitions after being resized? Or do I need some more magic?

Thank you.

-- 
Erwan





Re: Resizing Raid 1 partitions

2010-05-19 Thread Aaron Toponce
On 05/19/2010 12:47 PM, Erwan David wrote:
   Hi,
 
 I have a setup with 2 disks and following raid setting:
 
 sda1+sdb1 - md0, /
 sda2+sdb2 - md1 swap
 sda3+sdb3 - md2 /home
 
 I'd like to resize partitions to get more space on md2 and less on md0.   
 
 What would be a good way to achieve this?

The best way to achieve this would be to use LVM above your software
RAID. At that point, it would be very painless, compared to what is
ahead of you now. If you're curious, here would be the steps:

mdadm -C /dev/md0 -n 2 -l 1 -a yes /dev/sd{a,b}
pvcreate /dev/md0
vgcreate home /dev/md0
lvcreate -L 1G -n swap home
lvcreate -L 10G -n root home
lvcreate -l 100%FREE -n home home

Then, for giving more space to home, and less to root, boot off a live
CD, and (assuming you're using ext3/4):

e2fsck /dev/home/root
resize2fs /dev/home/root 6G
lvreduce -L 6G /dev/home/root
lvextend -L +4G /dev/home/home
resize2fs /dev/home/home

That's it! However, because you chose not to use LVM, you will need to
boot off a live CD that supports Linux software RAID, rebuild the array,
and perform the resizing there. I'm not sure if GParted supports this or
not. Worth checking out, however.

I would personally recommend backing up your data, and reinstalling,
with LVM on top of your software RAID. You still have the redundancy,
and you have the awesome flexibility of resizing volumes with great ease.

-- 
. O .   O . O   . . O   O . .   . O .
. . O   . O O   O . O   . O O   . . O
O O O   . O .   . O O   O O .   O O O





help resizing a partition on GPT disk

2010-04-25 Thread Alexander Samad
Hi

I have built a 5T partition with my raid card and just expanded it out
to 9T; now I want to resize the partition. I had to use parted so that
I can use GPT partitions. Every time I go to resize, parted tells me it
doesn't know what fs is on the partition - it's a PV, so I am sort of
stuck ...


Never really used gpt before.

Thanks
Alex





Re: help resizing a partition on GPT disk

2010-04-25 Thread Alexander Samad
Did the brave thing: rm'ed the partition and recreated it :)

On Sun, Apr 25, 2010 at 4:49 PM, Alexander Samad a...@samad.com.au wrote:
 Hi

 I have built a 5T partition with my raid card and just expanded it out
 to 9T; now I want to resize the partition. I had to use parted so that
 I can use GPT partitions. Every time I go to resize, parted tells me it
 doesn't know what fs is on the partition - it's a PV, so I am sort of
 stuck ...


 Never really used gpt before.

 Thanks
 Alex






Re: Resizing partitions on production environment

2009-07-11 Thread Rob Owens
On Wed, Jul 08, 2009 at 10:42:14AM +0500, Daniel Suleyman wrote:
 Dear all, I have a big issue.
 I've installed lenny on my server, installed all programs and ran it in
 production, but now I need to install Oracle on it.
 My partitions have these sizes:
 user:~# df -kh
 Filesystem Size Used Avail Use% Mounted on
 /dev/cciss/c0d0p1 327M 141M 170M 46% /
 tmpfs 5.9G 0 5.9G 0% /lib/init/rw
 udev 10M 104K 9.9M 2% /dev
 tmpfs 5.9G 0 5.9G 0% /dev/shm
 /dev/cciss/c0d0p9 80G 42G 35G 56% /home
 /dev/cciss/c0d0p8 373M 11M 343M 3% /tmp
 /dev/cciss/c0d0p5 4.6G 1.6G 2.9G 36% /usr
 /dev/cciss/c0d0p6 2.8G 1.2G 1.5G 45% /var
 
 and Oracle won't install because my root partition is smaller than Oracle
 needs. How can I resize my partitions on the fly? Or at least from a live CD,
 but with guaranteed no data loss?
 Thank you in advance,
 Daniel

I've done this in the past when in a bind:

If oracle wants to install to /opt/oracle, mkdir /home/oracle and then ln -s 
/home/oracle /opt/oracle

This will put the oracle stuff in the /home partition.
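
That is (as root; untested):

 mkdir /home/oracle
 ln -s /home/oracle /opt/oracle   # Oracle writes to /opt/oracle, data lands on /home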

-Rob





Re: Resizing partitions on production environment

2009-07-08 Thread Csanyi Pal
Daniel Suleyman danik...@gmail.com writes:

 Dear all, I have a big issue.
 I've installed lenny on my server, installed all programs and ran it in
 production, but now I need to install Oracle on it.

 and Oracle won't install because my root partition is smaller than Oracle
 needs. How can I resize my partitions on the fly? Or at least from a live CD,
 but with guaranteed no data loss?

You can try gparted.

http://gparted.sourceforge.net

You can download GParted Live on CD there too.

--
Regards, Paul Chany





Re: Resizing partitions on production environment

2009-07-08 Thread lee
On Wed, Jul 08, 2009 at 10:42:14AM +0500, Daniel Suleyman wrote:

 and Oracle won't install because my root partition is smaller than
 Oracle needs. How can I resize my partitions on the fly? Or at least
 from a live CD, but with guaranteed no data loss.  Thank you in
 advance, Daniel

In any case, make a backup before you try anything. I probably wouldn't
try anything but repartitioning the disk(s), or adding another disk and
moving / to the new disk. When you have the needed backup, you can just
as well repartition.





Re: Resizing partitions on production environment

2009-07-08 Thread Noah Dain
On Wed, Jul 8, 2009 at 3:38 PM, leel...@yun.yagibdah.de wrote:
 On Wed, Jul 08, 2009 at 10:42:14AM +0500, Daniel Suleyman wrote:

  and Oracle won't install because my root partition is smaller than
  Oracle needs. How can I resize my partitions on the fly? Or at least
  from a live CD, but with guaranteed no data loss.  Thank you in
  advance, Daniel

Oracle *probably* just wants to install to /opt.  If it's possible,
you could create a new filesystem and mount it as /opt.
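
Assuming there is a spare partition or a new disk to give it, something like
(untested; /dev/cciss/c0d1p1 is a made-up example device):

 mkfs.ext3 /dev/cciss/c0d1p1
 mkdir -p /opt
 mount /dev/cciss/c0d1p1 /opt
 echo '/dev/cciss/c0d1p1 /opt ext3 defaults 0 2' >> /etc/fstab   # survive reboots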


 In any case, make a backup before you try anything. I probably wouldn't
 try anything but repartitioning the disk(s), or adding another disk and
 moving / to the new disk. When you have the needed backup, you can just
 as well repartition.

++






-- 
Noah Dain
The beatings will continue, until morale improves - the Management





Re: Resizing partitions on production environment

2009-07-08 Thread Daniel Suleyman
OK, I have hardware RAID; will it make my life difficult, or will it be
transparent to the OS?
I am installing oracle-xe-universal from sources.list; it gives me an error:
no space for /etc/dp?? (don't remember the dir).
I changed dpkg options, setting instdir to /home/, but after that the oracle
package output an error: can't find pre-install script and post-install script :(

2009/7/9 Noah Dain noahd...@gmail.com

 On Wed, Jul 8, 2009 at 3:38 PM, leel...@yun.yagibdah.de wrote:
  On Wed, Jul 08, 2009 at 10:42:14AM +0500, Daniel Suleyman wrote:
 
  and Oracle won't install because my root partition is smaller than
  Oracle needs. How can I resize my partitions on the fly? Or at least
  from a live CD, but with guaranteed no data loss.  Thank you in
  advance, Daniel

 Oracle *probably* just wants to install to /opt.  If it's possible,
 you could create a new filesystem and mount it as /opt.

 
  In any case, make a backup before you try anything. I probably wouldn't
  try anything but repartitioning the disk(s), or adding another disk and
  moving / to the new disk. When you have the needed backup, you can just
  as well repartition.

 ++

 



 --
 Noah Dain
 The beatings will continue, until morale improves - the Management





