Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-28 Thread Jim Klimov via openindiana-discuss
On February 28, 2021 9:18:09 PM UTC, Stephan Althaus wrote:
> [...]

Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-28 Thread Stephan Althaus

On 02/26/21 09:07 PM, Andreas Wacknitz wrote:

[...]

Hi,

I think I hit the bug again, even when using beadm destroy -s

╰─➤  zfs list -t snapshot
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
rpool1/ROOT/openindiana-2021:02:26@2021-02-22-16:33:39  489M      -  26.5G  -
rpool1/ROOT/openindiana-2021:02:26@2021-02-24-12:32:24  472M      -  26.5G  -   <- only one snapshot here from Feb. 24th
rpool1/ROOT/openindiana-2021:02:26@2021-02-25-13:03:15     0      -  26.5G  -
rpool1/ROOT/openindiana-2021:02:26
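
(A way to tell which destroyed BE a leftover snapshot belonged to is to compare
its creation time with the old BE's Created column -- a sketch, using one of
the snapshot names above:)

$ zfs get -H -o value creation rpool1/ROOT/openindiana-2021:02:26@2021-02-24-12:32:24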

Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-26 Thread Andreas Wacknitz

On 23.02.21 at 08:00, Stephan Althaus wrote:

[...]

Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-22 Thread Stephan Althaus

On 02/23/21 12:13 AM, Tim Mooney via openindiana-discuss wrote:
[...]


Hello!

"beadm -s " destroys snapshots.

"rpool/ROOT/openindiana-2021:02:07-1" is the filesystem of the current BE.

I don't know why these snapshots are in there,
but they are left over from the pkg upgrade somehow.

I don't think that "beadm -s" is to blame here.

Maybe an additional parameter would be nice to get rid of old snapshots
within the BE filesystem(s).
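
Until such a parameter exists, a manual sweep along these lines might do (a
sketch only; the dataset name is the current BE from Tim's listing, and zfs
refuses to destroy a snapshot that another BE's clone still depends on):

$ zfs list -H -o name -t snapshot -r rpool/ROOT/openindiana-2021:02:07-1
# review that list first, then feed it to zfs destroy:
$ zfs list -H -o name -t snapshot -r rpool/ROOT/openindiana-2021:02:07-1 | xargs -n1 pfexec zfs destroy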


Greetings,

Stephan


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-22 Thread Tim Mooney via openindiana-discuss

In regard to: Re: [OpenIndiana-discuss] safely cleanup pkg cache?, Andreas...:


On 21.02.21 at 22:42, Stephan Althaus wrote:

Hello!

The "-s" option does the minimal obvious remove of the corresponding
snapshot:


My experience seems to match what Andreas and Toomas are saying: -s isn't
doing what it's supposed to be doing (?).

After using

sudo beadm destroy -F -s -v 

to destroy a dozen or so boot environments, I'm down to just this
for boot environments:

$ beadm list
BE                                Active Mountpoint Space  Policy Created
openindiana                       -      -          12.05M static 2019-05-17 10:37
openindiana-2021:02:07            -      -          27.27M static 2021-02-07 01:01
openindiana-2021:02:07-backup-1   -      -          117K   static 2021-02-07 13:06
openindiana-2021:02:07-backup-2   -      -          117K   static 2021-02-07 13:08
openindiana-2021:02:07-1          NR     /          51.90G static 2021-02-07 17:23
openindiana-2021:02:07-1-backup-1 -      -          186K   static 2021-02-07 17:48
openindiana-2021:02:07-1-backup-2 -      -          665K   static 2021-02-07 17:58
openindiana-2021:02:07-1-backup-3 -      -          666K   static 2021-02-07 18:02


However, zfs list still shows (I think) snapshots for some of the
intermediate boot environments that I destroyed:

$ zfs list -t snapshot
NAME                                                      USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/openindiana-2021:02:07-1@install   559M  -  5.94G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:34:55   472M  -  6.28G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:46:32   555K  -  6.28G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:48:56  2.18M  -  6.45G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-06-13-22:13:18  1015M  -  9.74G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-06-21-16:25:04  1.21G  -  9.85G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-08-23-16:17:28   833M  -  9.74G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-08-28-21:51:55  1.40G  -  10.8G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-09-12-23:35:08   643M  -  11.7G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-10-02-22:55:57   660M  -  12.0G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-11-09-00:04:17   736M  -  12.4G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-12-05-01:02:10  1.02G  -  12.7G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-12-20-19:55:51   788M  -  12.9G  -
rpool/ROOT/openindiana-2021:02:07-1@2020-02-13-23:17:35   918M  -  13.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-01-21-02:27:31  1.74G  -  13.9G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-06-22:47:15  1.71G  -  18.8G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-06:59:02  1.22G  -  19.1G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-19:06:07   280M  -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-19:08:29   280M  -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:21:52   640K  -  19.1G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:23:46   868K  -  19.2G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:48:07   294M  -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:58:44   280M  -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-08-00:02:17   280M  -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-21-06:24:56  3.49M  -  19.4G  -

Now I have to figure out how to map the zfs snapshots to the boot
environments that I kept, so that I can "weed out" the zfs snapshots
that I don't need.
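
One way to find the strays (a sketch, assuming every BE dataset lives under
rpool/ROOT and that "beadm list -a" prints dataset and snapshot names in its
first column; verify on your system before destroying anything):

$ zfs list -H -o name -t snapshot -r rpool/ROOT | sort > /tmp/zfs.snaps
$ beadm list -a | awk '/@/ {print $1}' | sort > /tmp/beadm.snaps
$ comm -23 /tmp/zfs.snaps /tmp/beadm.snaps    # snapshots ZFS has but no kept BE claims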

I appreciate all the discussion and info my question has spawned!  I
didn't anticipate the issue being as complicated as it appears it is.

Tim
--
Tim Mooney tim.moo...@ndsu.edu
Enterprise Computing & Infrastructure /
Division of Information Technology/701-231-1076 (Voice)
North Dakota State University, Fargo, ND 58105-5164

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-22 Thread Andreas Wacknitz

On 21.02.21 at 22:42, Stephan Althaus wrote:

Hello!

The "-s" option does the minimal obvious remove of the corresponding
snapshot:

[...]

Which facts am I missing here?


Sorry, I was afk when I wrote my answer. It was just from memory. I had
tested with the -s option before and IIRC had similar problems.
I will thoroughly re-test when time permits.

Regards,
Andreas


Greetings,
Stephan

On 02/21/21 10:03 PM, Andreas Wacknitz wrote:

[...]
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-21 Thread Richard L. Hamilton


> On Feb 21, 2021, at 16:42, Stephan Althaus wrote:
> 
> Hello!
> 
> The "-s" option does the minimal obvious remove of the corresponding snapshot:


Yes, that does help simplify cleanup; thanks.

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-21 Thread Judah Richardson
I've always wondered this, but never had the context to ask until now:
purely out of curiosity (*not* criticism), why does pkg on OI not support pkg
clean or something similar? Or is there a similar OI pkg command I'm
missing? Across all my desktop OSes, I typically clean my package caches
after each successful update for storage (especially backup) consumption
reasons.
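
(As far as I can tell there is no "pkg clean" on OI; the nearest knobs are the
two mentioned elsewhere in this thread -- a sketch, assuming default image
settings:)

$ pfexec pkg set-property flush-content-cache-on-success True   # empty the download cache after each successful operation
$ pfexec pkg purge-history                                      # discard /var/pkg/history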

On Sun, Feb 21, 2021 at 3:42 PM Stephan Althaus <stephan.alth...@duedinghausen.eu> wrote:

> [...]

Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-21 Thread Stephan Althaus

Hello!

The "-s" option does the minimal obvious remove of the corresponding 
snapshot:


$ beadm list
BE    Active Mountpoint Space   Policy Created
openindiana-2020:11:03    -  - 42.08M  static 2020-11-03 09:30
openindiana-2020:11:26    -  - 40.50M  static 2020-11-26 13:52
openindiana-2020:11:26-backup-1   -  - 263K    static 2020-12-11 22:27
openindiana-2020:12:29    -  - 34.60M  static 2020-12-29 22:07
openindiana-2021:01:13    -  - 34.68M  static 2021-01-13 21:57
openindiana-2021:02:18    -  - 409.54M static 2021-02-18 22:31
openindiana-2021:02:18-backup-1   -  - 42.21M  static 2021-02-19 13:35
openindiana-2021:02:20    -  - 42.67M  static 2021-02-20 20:52
openindiana-2021:02:20-1  NR / 166.94G static 2021-02-20 21:22
openindiana-2021:02:20-1-backup-1 -  - 261K    static 2021-02-20 21:30
$ zfs list -t all -r rpool|grep "2020:11:03"
rpool/ROOT/openindiana-2020:11:03 42.1M  5.40G  36.4G  /
$ sudo beadm destroy -s openindiana-2020:11:03
Are you sure you want to destroy openindiana-2020:11:03?
This action cannot be undone (y/[n]): y
Destroyed successfully
$ zfs list -t all -r rpool|grep "2020:11:03"
$

Which facts am I missing here?

Greetings,
Stephan

On 02/21/21 10:03 PM, Andreas Wacknitz wrote:

That doesn't work correctly either.

[...]

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-21 Thread Andreas Wacknitz
That doesn't work correctly either.

Sent from my iPhone

> On 21.02.2021 at 21:43, Stephan Althaus wrote:
> 
> [...]


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-21 Thread Stephan Althaus

On 02/21/21 09:17 AM, Andreas Wacknitz wrote:

[...]


Hello!


I use

beadm destroy -s 

to auto-destroy the corresponding snapshots. See "man beadm"


Greetings,

Stephan



___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-21 Thread Richard L. Hamilton
While I'm not sure why extra snapshots seem to be involved, here's what I 
actually see when going through an update, looking at the results, and cleaning 
up.


# starting with just the running BE, no extra snapshots:
root@openindiana:~# pkg update
   Packages to install:  10
Packages to update: 377
   Create boot environment: Yes
Create backup boot environment:  No

DOWNLOADPKGS FILESXFER (MB)   SPEED
Completed387/387 5947/5947  116.2/116.2  617k/s

PHASE  ITEMS
Removing old actions   1139/1139
Installing new actions 1851/1851
Updating modified actions  6684/6684
Updating package state database Done 
Updating package cache   377/377 
Updating image stateDone 
Creating fast lookup database   Done 

A clone of openindiana-2021:02:20 exists and has been updated and activated.
On the next boot the Boot Environment openindiana-2021:02:21 will be
mounted on '/'.  Reboot when ready to switch to this updated BE.


---
NOTE: Please review release notes posted at:

https://docs.openindiana.org/release-notes/latest-changes/
---

root@openindiana:~# beadm list
BE Active Mountpoint Space   Policy Created
openindiana-2021:02:20 N  /  409.50K static 2021-02-20 02:01
openindiana-2021:02:21 R  -  18.03G  static 2021-02-21 03:43
root@openindiana:~# beadm list -a
BE/Dataset/Snapshot                                           Active Mountpoint Space   Policy Created
openindiana-2021:02:20
   rpool1/ROOT/openindiana-2021:02:20                         N      /          409.50K static 2021-02-20 02:01
   rpool1/ROOT/openindiana-2021:02:21@2021-02-21-08:43:19     -      -          116K    static 2021-02-21 03:43
   rpool1/ROOT/openindiana-2021:02:21@2021-02-21-08:40:01     -      -          1.00M   static 2021-02-21 03:40
   rpool1/ROOT/openindiana-2021:02:21/var@2021-02-21-08:43:19 -      -          1.43M   static 2021-02-21 03:43
   rpool1/ROOT/openindiana-2021:02:21/var@2021-02-21-08:40:01 -      -          30.66M  static 2021-02-21 03:40
openindiana-2021:02:21
   rpool1/ROOT/openindiana-2021:02:21                         R      -          18.03G  static 2021-02-21 03:43
root@openindiana:~# exec shutdown -i6 -y -g0

# after the reboot:

root@openindiana:~# beadm list
BE Active Mountpoint Space  Policy Created
openindiana-2021:02:20 -  -  22.53M static 2021-02-20 02:01
openindiana-2021:02:21 NR /  18.04G static 2021-02-21 03:43
root@openindiana:~# beadm list -a
BE/Dataset/Snapshot                                           Active Mountpoint Space  Policy Created
openindiana-2021:02:20
   rpool1/ROOT/openindiana-2021:02:20                         -      -          22.53M static 2021-02-20 02:01
   rpool1/ROOT/openindiana-2021:02:21@2021-02-21-08:43:19     -      -          1.00M  static 2021-02-21 03:43
   rpool1/ROOT/openindiana-2021:02:21@2021-02-21-08:40:01     -      -          1.00M  static 2021-02-21 03:40
   rpool1/ROOT/openindiana-2021:02:21/var@2021-02-21-08:43:19 -      -          1.59M  static 2021-02-21 03:43
   rpool1/ROOT/openindiana-2021:02:21/var@2021-02-21-08:40:01 -      -          30.66M static 2021-02-21 03:40
openindiana-2021:02:21
   rpool1/ROOT/openindiana-2021:02:21                         NR     /          18.04G static 2021-02-21 03:43

# since it came up, I can probably blow away the old stuff:

root@openindiana:~# beadm destroy openindiana-2021:02:20
Are you sure you want to destroy openindiana-2021:02:20?
This action cannot be undone (y/[n]): y
Destroyed successfully
root@openindiana:~# beadm list -a
BE/Dataset/Snapshot                                           Active Mountpoint Space   Policy Created
openindiana-2021:02:21
   rpool1/ROOT/openindiana-2021:02:21                         NR     /          18.04G  static 2021-02-21 03:43
   rpool1/ROOT/openindiana-2021:02:21/var@2021-02-21-08:40:01 -      -          350.62M static 2021-02-21 03:40
   rpool1/ROOT/openindiana-2021:02:21@2021-02-21-08:40:01     -      -          436.01M static 2021-02-21 03:40
root@openindiana:~# beadm destroy openindiana-2021:02:21@2021-02-21-08:40:01
Are you sure you want to destroy openindiana-2021:02:21@2021-02-21-08:40:01?
This action cannot be undone (y/[n]): y
Destroyed successfully
root@openindiana:~# beadm list -a
BE/Dataset/Snapshot                      Active Mountpoint Space  Policy Created
openindiana-2021:02:21
   rpool1/ROOT/openindiana-2021:02:21    NR     /          17.27G static 2021-02-21 03:43
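
My guess at why the snapshots migrate rather than vanish: the update builds
the new BE as a ZFS clone of a snapshot of the old one and then promotes it,
which hands the shared snapshots over to the promoted dataset. The clone
relationships can be inspected like this (a sketch):

$ zfs get -r -H -o name,value origin rpool1/ROOT
# datasets whose origin is not "-" are clones; after promotion, the shared
# snapshots hang off the promoted (new) BE dataset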


> On Feb 21, 2021, at 03:17, Andreas Wacknitz wrote:
> 
> [...]

Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-21 Thread Toomas Soome via openindiana-discuss



> On 21. Feb 2021, at 10:17, Andreas Wacknitz  wrote:
> 
> [...]
> I have a question regarding beadm destroy here:
> I regularly destroy old BEs with "pfexec beadm destroy ",
> keeping only a handful of BEs.
> Checking with "zfs list -t snapshot" shows that this won't destroy most
> (all?) related snapshots, e.g. it typically frees only a few MB.
> Thus, my rpool is filling up over time and I have to manually destroy
> ZFS snapshots that belong to deleted BEs.
> Is that intentional behavior of beadm destroy, and is there something
> I can do to improve my procedure?
> 
> 

It is a bug. It has been around for some time, but I never had enough time to fix it.

rgds,
toomas


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-21 Thread Richard L. Hamilton
Before you do that, just to cover the obvious: have you gotten rid of old boot
environments (and the snapshots shown with beadm list -a)?

> On Feb 21, 2021, at 01:45, Tim Mooney via openindiana-discuss wrote:
> 
> 
> All-
> 
> My space-constrained OI hipster build VM is running low on space.
> 
> It looks like either pkg caching or pkg history is using quite a lot of
> space:
> 
> $ pfexec du -ks /var/pkg/* | sort -n
> 0   /var/pkg/gui_cache
> 0   /var/pkg/lock
> 0   /var/pkg/modified
> 0   /var/pkg/ssl
> 6   /var/pkg/pkg5.image
> 955 /var/pkg/lost+found
> 5557    /var/pkg/history
> 23086   /var/pkg/license
> 203166  /var/pkg/cache
> 241106  /var/pkg/state
> 9271692 /var/pkg/publisher
> 
> What is the correct, safe way to clean up anything from pkg that I don't
> need?
> 
> The closest information I've found is an article from Oracle on "Minimize
> Stored Image Metadata":
> 
>   https://docs.oracle.com/cd/E53394_01/html/E54739/minvarpkg.html
> 
> This suggests changing the 'flush-content-cache-on-success' property
> to true (OI defaults to False).
> 
> Is that it, or are there other (generally safe) cleanup steps that I could
> take too?  Is 'pkg purge-history' a good idea?
> 
> Thanks,
> 
> Tim
> -- 
> Tim Mooney tim.moo...@ndsu.edu
> Enterprise Computing & Infrastructure /
> Division of Information Technology/701-231-1076 (Voice)
> North Dakota State University, Fargo, ND 58105-5164
> 
> 

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-21 Thread Andreas Wacknitz

On 21.02.21 at 09:10, Toomas Soome via openindiana-discuss wrote:



On 21. Feb 2021, at 08:45, Tim Mooney via openindiana-discuss wrote:

[...]


do not forget to check beadm list -a / zfs list -t snapshot

rgds,
toomas


I have a question regarding beadm destroy here:
I regularly destroy old BEs with "pfexec beadm destroy ",
keeping only a handful of BEs.
Checking with "zfs list -t snapshot" shows that this won't destroy most
(all?) related snapshots, e.g. it typically frees only a few MB.
Thus, my rpool is filling up over time and I have to manually destroy
ZFS snapshots that belong to deleted BEs.
Is that intentional behavior of beadm destroy, and is there something
I can do to improve my procedure?
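
(For the manual cleanup, zfs destroy accepts a percent-separated snapshot
range and -n makes it a dry run -- a sketch using snapshot names quoted
elsewhere in this thread; check the -nv output before dropping -n:)

$ pfexec zfs destroy -nv rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:34:55%2020-02-13-23:17:35
$ pfexec zfs destroy -v rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:34:55%2020-02-13-23:17:35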

Regards,
Andreas

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-21 Thread Toomas Soome via openindiana-discuss



> On 21. Feb 2021, at 08:45, Tim Mooney via openindiana-discuss wrote:
> 
> [...]
> 

do not forget to check beadm list -a / zfs list -t snapshot

rgds,
toomas


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-20 Thread Richard Lowe
History is recorded for the user only. If you don't use it, you could purge it.

Flushing the cache on success is fine too; anything it needs again will
just get redownloaded.
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss