Re: What's Meta v1?

2022-02-13 Thread Dan Cross
It is the version listed in the file `/var/db/pkg/Avalon.meta`. On my
DragonFly dev machine (which I basically never have time to poke at :-(
...) the contents are:

{"version":2,"packing_format":"txz","manifests":"packagesite.yaml","filesite":"filesite.yaml","manifests_archive":"packagesite","filesite_archive":"filesite"}

On the other hand, my ham-radio DragonFly machine, which is running
6.2-RELEASE, has this:

{"version":1,"packing_format":"txz","digest_format":"sha256_base32","digests":"digests","digests_archive":"digests","manifests":"packagesite.yaml","filesite":"filesite.yaml","manifests_archive":"packagesite","filesite_archive":"filesite"}

I'm not quite sure what creates or updates that file, nor what the
implications of simply changing the "version" field in the latter file
from 1 to 2 would be.
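For what it's worth, here is a sketch for reading the "version" field without editing anything by hand; it uses only awk, so it needs nothing beyond the base system. The sample file stands in for /var/db/pkg/Avalon.meta:

```shell
# Extract the integer "version" field from a one-line JSON repo meta
# file. Purely a sketch: it assumes the simple one-line layout shown
# above, not arbitrary JSON.
meta_version() {
    awk -F'"version":' '{ split($2, a, /[,}]/); print a[1] }' "$1"
}

# Demonstrate on a sample; the real file is /var/db/pkg/Avalon.meta:
printf '%s\n' '{"version":2,"packing_format":"txz"}' > /tmp/meta.sample
meta_version /tmp/meta.sample   # prints: 2
```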

- Dan C.


On Sun, Feb 13, 2022 at 10:58 AM Pierre Abbat 
wrote:

> I just ran "pkg ins hs-stack" and got "WARNING: Meta v1 support will be
> removed in the next version". What's this mean?
>
> Pierre
> --
> gau do li'i co'e kei do
>
>
>
>


Re: DragonFly 6.2 released

2022-01-10 Thread Dan Cross
You already issued them. You are good to go; just follow the instructions
in the email. :-)

On Mon, Jan 10, 2022 at 5:56 PM Mario Marietto 
wrote:

> I am not very experienced. So, which commands should I issue?
>
> On Mon, Jan 10, 2022 at 11:54 PM Dan Cross wrote:
>
>> On Mon, Jan 10, 2022 at 5:47 PM Mario Marietto 
>> wrote:
>>
>>> Something did not go well here, I think:
>>>
>>> I'm using this version before the upgrade :
>>>
>>> root@marietto:/usr/src # uname -a
>>>
>>> DragonFly marietto 6.1-DEVELOPMENT DragonFly
>>> v6.1.0.589.gd52e31-DEVELOPMENT #1: Sat Jan  1 17:38:32 CET 2022
>>> marietto@marietto:/usr/obj/usr/src/sys/X86_64_GENERIC  x86_64
>>>
>>> So, I want to upgrade it to 6.2, and I do:
>>>
>>> root@marietto:/home/marietto # cd /usr/src
>>>
>>> root@marietto:/usr/src # git fetch origin
>>> remote: Enumerating objects: 96, done.
>>> remote: Counting objects: 100% (96/96), done.
>>> remote: Compressing objects: 100% (68/68), done.
>>> remote: Total 68 (delta 56), reused 0 (delta 0), pack-reused 0
>>> Unpacking objects: 100% (68/68), 9.08 KiB | 31.00 KiB/s, done.
>>> From git://git.dragonflybsd.org/dragonfly
>>>    d52e317013..1dddac0a52  master                -> origin/master
>>>  * [new branch]            DragonFly_RELEASE_6_2 -> origin/DragonFly_RELEASE_6_2
>>>  * [new tag]               v6.2.1                -> v6.2.1
>>>  * [new tag]               v6.3.0                -> v6.3.0
>>>
>>> root@marietto:/usr/src # git branch DragonFly_RELEASE_6_2
>>> origin/DragonFly_RELEASE_6_2
>>> Branch 'DragonFly_RELEASE_6_2' set up to track remote branch
>>> 'DragonFly_RELEASE_6_2' from 'origin'.
>>>
>>> root@marietto:/usr/src # git checkout DragonFly_RELEASE_6_2
>>> Switched to branch 'DragonFly_RELEASE_6_2'
>>> Your branch is up to date with 'origin/DragonFly_RELEASE_6_2'.
>>>
>>> root@marietto:/usr/src # git pull
>>> Already up to date. ---> Already up to date? That's not true, since I'm
>>> using 6.1. Shouldn't it get the new source code here?
>>>
>>
>> You did when you checked out the branch. I believe the `git pull` is in
>> these instructions on the off chance that something is backported to the
>> DragonFly_RELEASE_6_2 branch after the instructions are written but before
>> someone upgrades.
>>
>> - Dan C.
>>
>> On Mon, Jan 10, 2022 at 9:12 PM Justin Sherrill <
>>> jus...@shiningsilence.com> wrote:
>>>
>>>> DragonFly 6.2.1 is released - here's the release page:
>>>>
>>>> https://www.dragonflybsd.org/release62/
>>>>
>>>> 6.2.0 was never released because I screwed up the tagging, so you can go
>>>> right from 6.0 to 6.2.1.  Here's the 6.2.1 tag with a list of the commits
>>>> since 5.8:
>>>>
>>>>
>>>> https://lists.dragonflybsd.org/pipermail/commits/2022-January/820802.html
>>>>
>>>> The normal ISO and IMG files are available for download and install,
>>>> plus an uncompressed ISO image for those installing remotely.
>>>>
>>>> If updating from an older version of DragonFly, bring in the 6.2 source:
>>>>
>>>> > cd /usr/src
>>>> > git fetch origin
>>>> > git branch DragonFly_RELEASE_6_2 origin/DragonFly_RELEASE_6_2
>>>> > git checkout DragonFly_RELEASE_6_2
>>>> > git pull
>>>>
>>>> And then rebuild: (still in /usr/src)
>>>> > make build-all
>>>> > make install-all
>>>> > make upgrade
>>>>
>>>> After your next reboot, you can optionally update your rescue system:
>>>>
>>>> (reboot)
>>>> > cd /usr/src
>>>> > make initrd
>>>>
>>>> Don't forget to upgrade your existing packages if you haven't recently:
>>>>
>>>> > pkg update
>>>> > pkg upgrade
>>>>
>>>>
>>>
>>> --
>>> Mario.
>>>
>>
>
> --
> Mario.
>


Re: DragonFly 6.2 released

2022-01-10 Thread Dan Cross
On Mon, Jan 10, 2022 at 5:47 PM Mario Marietto 
wrote:

> Something did not go well here, I think:
>
> I'm using this version before the upgrade :
>
> root@marietto:/usr/src # uname -a
>
> DragonFly marietto 6.1-DEVELOPMENT DragonFly
> v6.1.0.589.gd52e31-DEVELOPMENT #1: Sat Jan  1 17:38:32 CET 2022
> marietto@marietto:/usr/obj/usr/src/sys/X86_64_GENERIC  x86_64
>
> So, I want to upgrade it to 6.2, and I do:
>
> root@marietto:/home/marietto # cd /usr/src
>
> root@marietto:/usr/src # git fetch origin
> remote: Enumerating objects: 96, done.
> remote: Counting objects: 100% (96/96), done.
> remote: Compressing objects: 100% (68/68), done.
> remote: Total 68 (delta 56), reused 0 (delta 0), pack-reused 0
> Unpacking objects: 100% (68/68), 9.08 KiB | 31.00 KiB/s, done.
> From git://git.dragonflybsd.org/dragonfly
>    d52e317013..1dddac0a52  master                -> origin/master
>  * [new branch]            DragonFly_RELEASE_6_2 -> origin/DragonFly_RELEASE_6_2
>  * [new tag]               v6.2.1                -> v6.2.1
>  * [new tag]               v6.3.0                -> v6.3.0
>
> root@marietto:/usr/src # git branch DragonFly_RELEASE_6_2
> origin/DragonFly_RELEASE_6_2
> Branch 'DragonFly_RELEASE_6_2' set up to track remote branch
> 'DragonFly_RELEASE_6_2' from 'origin'.
>
> root@marietto:/usr/src # git checkout DragonFly_RELEASE_6_2
> Switched to branch 'DragonFly_RELEASE_6_2'
> Your branch is up to date with 'origin/DragonFly_RELEASE_6_2'.
>
> root@marietto:/usr/src # git pull
> Already up to date. ---> Already up to date? That's not true, since I'm
> using 6.1. Shouldn't it get the new source code here?
>

You did when you checked out the branch. I believe the `git pull` is in
these instructions on the off chance that something is backported to the
DragonFly_RELEASE_6_2 branch after the instructions are written but before
someone upgrades.
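One way to convince yourself the sources really did move (despite the reassuring-sounding "Already up to date") is to ask git directly. A sketch:

```shell
# Check what the tree actually contains. On the real system you'd run:
#   git -C /usr/src status -sb        # branch + tracking state
#   git -C /usr/src describe --tags   # should name a v6.2.x tag, not a v6.1.x one
# Demonstrated against a throwaway repo so the commands run anywhere:
rm -rf /tmp/gdemo && git init -q /tmp/gdemo
git -C /tmp/gdemo -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m 'stand-in commit'
git -C /tmp/gdemo tag v6.2.1
git -C /tmp/gdemo describe --tags   # prints: v6.2.1
```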

- Dan C.

On Mon, Jan 10, 2022 at 9:12 PM Justin Sherrill <
> jus...@shiningsilence.com> wrote:
>
>> DragonFly 6.2.1 is released - here's the release page:
>>
>> https://www.dragonflybsd.org/release62/
>>
>> 6.2.0 was never released because I screwed up the tagging, so you can go right
>> from 6.0 to 6.2.1.  Here's the 6.2.1 tag with a list of the commits since
>> 5.8:
>>
>> https://lists.dragonflybsd.org/pipermail/commits/2022-January/820802.html
>>
>> The normal ISO and IMG files are available for download and install,
>> plus an uncompressed ISO image for those installing remotely.
>>
>> If updating from an older version of DragonFly, bring in the 6.2 source:
>>
>> > cd /usr/src
>> > git fetch origin
>> > git branch DragonFly_RELEASE_6_2 origin/DragonFly_RELEASE_6_2
>> > git checkout DragonFly_RELEASE_6_2
>> > git pull
>>
>> And then rebuild: (still in /usr/src)
>> > make build-all
>> > make install-all
>> > make upgrade
>>
>> After your next reboot, you can optionally update your rescue system:
>>
>> (reboot)
>> > cd /usr/src
>> > make initrd
>>
>> Don't forget to upgrade your existing packages if you haven't recently:
>>
>> > pkg update
>> > pkg upgrade
>>
>>
>
> --
> Mario.
>


Re: The DFly website is down.

2022-01-07 Thread Dan Cross
Surely the current hostname is `dragonflybsd.org`, not `dragonlfybsd.org`?

On Fri, Jan 7, 2022 at 12:34 PM Mehmet Erol Sanliturk <
m.e.sanlit...@gmail.com> wrote:

>
> From Turkey , web site is not accessible  :
>
> "
> Hmm. We’re having trouble finding that site.
>
> We can’t connect to the server at dragonlfybsd.org.
> .
> .
> .
> "
>
> Mehmet Erol Sanliturk
>
>
> On Fri, Jan 7, 2022 at 8:14 PM Ladar Levison  wrote:
>
>> I assume everyone (but me) knows that dragonlfybsd.org isn't working? I
>> also can't load mirror-master.dragonflybsd.org. L~
>>
>


Re: RISC-V port?

2021-07-23 Thread Dan Cross
On Fri, Jul 23, 2021 at 7:47 AM Aaron LI  wrote:

> Hi Dan,
>
> We have interest in bringing DragonFly to other platforms, like RISC-V
> and AArch64 (we even had a bounty for that for a long time). However, we’re
> really lacking developers to do that…
>

Yeah, I'm kind of suggesting that I take a swing at doing it myself. :-)

> On the other hand, I think the current urgent task is to update the
> graphics stack to support more Intel and AMD GPUs.
>
> Cheers,
> Aaron
>
> > On Jul 23, 2021, at 19:24, Dan Cross  wrote:
> >
> > 
> > I'm playing around with OpenBSD on a HiFive Unmatched board, and thought
> it might be interesting to port DragonFly to RISC-V. Would there be
> interest in such a thing?
> >
> > - Dan C.
> >
>


Re: Hammer errors.

2021-07-02 Thread Dan Cross
On Thu, Jul 1, 2021 at 6:11 PM Dan Cross  wrote:

> On Thu, Jul 1, 2021 at 3:00 PM Matthew Dillon 
> wrote:
>
>> Upgrade to 6.0 for sure, as it fixes at least one bug in HAMMER2, to
>> eliminate that possibility.
>>
>
> Yes, both the previous installation and the one I put together yesterday
> were running 6.0. The previous install had been upgraded to 6.0 when 6.0
> was released (or within a couple of days).
>
>
>> RAM is a possibility, though unlikely.  If you are overclocking, turn off
>> the overclocking.  An overclocked CPU can introduce corruption more easily
>> than overclocked RAM can.
>>
>
> Not overclocking. RAM in this machine is non-ECC: I could imagine a bit
> error slipping into a checksum, though.
>
> And check the dmesg for any NVME related errors.
>>
>
> No NVMe related errors appeared in the dmesg. That a completely separate
> NVMe part would exhibit the same problem would tend to indicate a hardware
> error outside of the storage device itself (RAM, bad CPU, I suppose, or a
> signal integrity issue when seating the NVMe part) or a software bug. That
> the system had previously been running the same version of the software for
> over a month without issue, and a fresh install popped up the same errors
> on the same hardware (modulo the new storage device) would suggest some
> sort of hardware issue.
>
> I'm going to run a memtest and see if I can get an NVMe diagnostic to run
> somehow.
>

I just wanted to close the loop on this.

The problem was bad RAM in the machine. The memory test failed
spectacularly, and it would appear some data in RAM got corrupted on the
way to the original NVMe. Replacing the RAM and rebuilding the filesystem
seems to be ok so far.

I'm doing a burnin to see if the problems manifest themselves again, but I
kind of suspect things have settled down.

Matt, thanks for the `hammer2 -vv show ...` tip. It's detecting no errors
now.

- Dan C.


Re: Hammer errors.

2021-07-01 Thread Dan Cross
On Thu, Jul 1, 2021 at 3:00 PM Matthew Dillon  wrote:

> Upgrade to 6.0 for sure, as it fixes at least one bug in HAMMER2, to
> eliminate that possibility.
>

Yes, both the previous installation and the one I put together yesterday
were running 6.0. The previous install had been upgraded to 6.0 when 6.0
was released (or within a couple of days).


> RAM is a possibility, though unlikely.  If you are overclocking, turn off
> the overclocking.  An overclocked CPU can introduce corruption more easily
> than overclocked RAM can.
>

Not overclocking. RAM in this machine is non-ECC: I could imagine a bit
error slipping into a checksum, though.

And check the dmesg for any NVME related errors.
>

No NVMe related errors appeared in the dmesg. That a completely separate
NVMe part would exhibit the same problem would tend to indicate a hardware
error outside of the storage device itself (RAM, bad CPU, I suppose, or a
signal integrity issue when seating the NVMe part) or a software bug. That
the system had previously been running the same version of the software for
over a month without issue, and a fresh install popped up the same errors
on the same hardware (modulo the new storage device) would suggest some
sort of hardware issue.

I'm going to run a memtest and see if I can get an NVMe diagnostic to run
somehow.

- Dan C.


Re: Hammer errors.

2021-07-01 Thread Dan Cross
Thanks, Matt, this is very helpful. I pulled some metadata dumps and
started poking at them. But in the interest of getting the machine back
online as soon as possible, I popped in another NVMe (a brand new part),
reinstalled Dragonfly (from scratch) and restored user data (which I
managed to grab with tar from the old NVMe -- no errors there,
fortunately). My intent was to poke at the funky NVMe on another machine.

Interestingly, the system started exhibiting the same errors sometime
overnight, with the new NVMe and a completely rebuilt filesystem.

Given that the kernel had been upgraded to 6.0 when that came out, and
two separate NVMe parts started showing the exact same problem within
about a day, I'm guessing something other than the storage device being
bad is afoot. Potential culprits could be bad RAM (dropping a bit could
certainly manifest as a bad checksum), or perhaps a signal-integrity
issue with the NVMe interface in the machine. I'm going to poke at it a
bit more.

Not exactly how I envisioned spending my Thursday, but hey.

- Dan C.

(PS: My "find bad files" trick was to use `ripgrep`: `rg laskdfjasdof891m
/` as root shows IO errors on a number of files -- the search pattern to rg
doesn't matter, I just typed some random gibberish that's unlikely to show
up in any real file.)
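A tool-free variant of the same trick, as a sketch: it reads every regular file and keeps going past failures, unlike a single tar run, which stops at the first error.

```shell
# Read every regular file under a directory and report the ones that
# fail, continuing past errors. Filenames containing newlines would
# confuse the read loop; good enough for a quick scan.
scan_readable() {
    find "$1" -xdev -type f 2>/dev/null | while IFS= read -r f; do
        cat -- "$f" > /dev/null 2>&1 || echo "READ ERROR: $f"
    done
}

# Demo on a small directory; no output means everything read cleanly.
mkdir -p /tmp/scan_demo && printf 'ok\n' > /tmp/scan_demo/good.txt
scan_readable /tmp/scan_demo
# To check the whole system (as root): scan_readable /
```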

On Wed, Jun 30, 2021 at 12:31 PM Matthew Dillon 
wrote:

> It looks like several different blocks failed a CRC test in your logs.  It
> would make sense to try to track down exactly where.  If you want to dive
> the filesystem meta-data you can dump it with full CRC tests using:
>
> hammer2 -vv show /dev/serno/S59ANMFNB34055E-1.s1d  > (save to a file not
> on the filesystem)
>
> And then look for 'failed)' lines in the output and track the inodes back
> to see which files are affected.  Its a bit round-about and you have to get
> familiar with the meta-data format, but that gives the most comprehensive
> results.   The output file is typically a few gigabytes (depends how big
> the filesystem is).   For example, I wound up with a single data block
> error in a mail file on one of my systems, easily rectified by copying-away
> the file and then deleting it.  I usually dump the output to a file and
> then run less on it, then search for failed crc checks.
>
>   data.106 00051676000f 206a/16
>vol=0 mir=00149dc6
> mod=02572acb lfcnt=0
>(xxhash64
> 32:b65e740a8f5ce753/799af250bfaf8651 failed)
>
> A 'quick' way to try to locate problems is to use tar, something like
> this.  However, tar exits when it encounters the first error so that won't
> find everything, and if the problem is in a directory block that can
> complicate matters.
>
> tar --one-file-system -cvf /dev/null /
>
> -Matt
>
>
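The dump-and-grep step Matt describes could be scripted roughly like this. The device path is the one from this thread, and the two-line sample below stands in for a real multi-gigabyte dump:

```shell
# The real dump would come from (run as root):
#   hammer2 -vv show /dev/serno/S59ANMFNB34055E-1.s1d > /tmp/h2dump.txt
# Here a two-line sample stands in for it:
printf '%s\n' \
  '  data.106 00051676000f 206a/16' \
  '  (xxhash64 32:b65e740a8f5ce753/799af250bfaf8651 failed)' \
  > /tmp/h2dump.txt

# Keep only the CRC failures, with line numbers for tracing the
# affected inodes back through the dump:
grep -n 'failed)' /tmp/h2dump.txt
```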


Hammer errors.

2021-06-30 Thread Dan Cross
I woke up to I/O errors on my DragonFly machine. dmesg shows this
representative sample:

xop_strategy_read: error 0001 loff=
xop_strategy_read: error 0001 loff=
xop_strategy_read: error 0001 loff=
chain 0002d557e00d.03 (data) meth=30 CHECK FAIL (flags=00140002,
bref/data 9cbebec7cf70e05e/82bd4a4d5e615881)
   Resides in root index - CRITICAL!!!
   In pfs ROOT on device serno/S59ANMFNB34055E-1.s1d
xop_strategy_read: error 0001 loff=
chain 0002d55f800e.03 (data) meth=30 CHECK FAIL (flags=00140002,
bref/data 1ad0e6d4985a5a26/49c183c5c504b3fd)
   Resides in root index - CRITICAL!!!
   In pfs ROOT on device serno/S59ANMFNB34055E-1.s1d
xop_strategy_read: error 0001 loff=
chain 0002d567400e.03 (data) meth=30 CHECK FAIL (flags=00140002,
bref/data c487220646639bc2/125e4d1e903b1cd6)
   Resides in root index - CRITICAL!!!
   In pfs ROOT on device serno/S59ANMFNB34055E-1.s1d
xop_strategy_read: error 0001 loff=
chain 0002d5730010.03 (data) meth=30 CHECK FAIL (flags=00140002,
bref/data d50e1a1014a3c3be/04022d5bee660bb4)
   Resides in root index - CRITICAL!!!
   In pfs ROOT on device serno/S59ANMFNB34055E-1.s1d
xop_strategy_read: error 0001 loff=0001
chain 0002d5730010.03 (data) meth=30 CHECK FAIL (flags=00140002,
bref/data d50e1a1014a3c3be/04022d5bee660bb4)
   Resides in root index - CRITICAL!!!
   In pfs ROOT on device serno/S59ANMFNB34055E-1.s1d
xop_strategy_read: error 0001 loff=0001

Which looks really pretty bad. There don't appear to be any errors from the
underlying device, however, and rebooting seems to have cleared it up
(which also seems kind of bad, in the sense that nondeterminism is never
fun).

I suppose the question is: how concerned should I be?

- Dan C.


Re: format of fortune.dat files

2014-11-13 Thread Dan Cross
Almost certainly a different header structure.  I don't think that the
Berkeley and Linux versions of fortune share much in the way of code, let
alone structure.  But I don't know.

On Thu, Nov 13, 2014 at 4:53 PM, Pierre Abbat p...@leaf.dragonflybsd.org
wrote:

 I have two files of sayings which I use in my sig in email. For two days my
 laptop (which runs Linux) was in the shop getting its memory tested, so I
 used
 my DragonFly box to read and write email. (It did not work very well. Kmail
 spent hours fetching email headers from my IMAP server; if I clicked on a
 message while it was doing this, it wouldn't display the message until it
 finished reading the folder, if at all.) I rsynced lots of stuff,
 including the
 sig files. The dat files turned out to be garbage in DFly. I got the
 laptop back
 today, copied the dat files to /tmp, and rsynced everything back. Here's
 the
 Linux version of one of the dat files:

 0002 0008 005a 0018 0000 2500 0000 003b
 005d 00a8 0104 013c 0156 017d 019f

 Here's the DragonFly version of the same file:

 0001 0000 0008 0000 005a 0000 0018 0000
 0000 0000 2500 0000 0000 0000 003b 0000
 005d 0000 00a8 0000 0104 0000 013c 0000
 0156 0000 017d 0000 019f 0000

 Both are 64-bit OSes. Why does DragonFly have the extra zeros, making the
 file
 twice as big?

 Pierre
 --
 When a barnacle settles down, its brain disintegrates.
 Já não percebe nada, já não percebe nada.
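The doubling Pierre describes is what you'd expect if the strfile header words are written at 64-bit width instead of 32-bit. A sketch for eyeballing header bytes directly; the sample bytes below are fabricated for illustration, not taken from a real fortune file:

```shell
# Fake the first two 32-bit big-endian header words of a version-2
# strfile .dat (version=2, numstr=8) and dump them byte by byte, so
# the result doesn't depend on the host's endianness:
printf '\000\000\000\002\000\000\000\010' > /tmp/sample.dat
od -A n -t x1 /tmp/sample.dat
# A 64-bit layout would interleave an extra zero word per field --
# exactly the "extra zeros" pattern in the DragonFly file above.
```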





Re: openldap authentication on DragonFly BSD

2013-11-26 Thread Dan Cross
On Tue, Nov 26, 2013 at 7:59 PM, Justin Sherrill jus...@shiningsilence.com
wrote:

 On Sun, Nov 24, 2013 at 9:30 PM, Predrag Punosevac punoseva...@gmail.com
wrote:

 I was wondering if somebody could point me to documentation explaining
 how to configure DragonFly BSD to authenticate its users via an LDAP
 server. I will briefly describe the LDAP requirement.


 DragonFly compiles /bin and /sbin as static binaries, which is good if
you are worried about a problem making /usr unavailable. However, nss/pam
assume you have dynamic binaries and use that to load libraries, so that
can't be used - yet.  There's been some discussion of it previously,
including today on IRC #dragonfly, and some work there, but it isn't yet
set up.

 I may have some of the details wrong - someone can correct me if so.  I
could certainly use it.

I can't comment on the correctness, but this is one thing I kind of think
OpenBSD gets right with their login_* framework: rather than link against
something, just use a separate binary to do the authentication.  PAM always
struck me as a solution looking for a problem.

- Dan C.


Re: Amusing discussion of 3.2

2012-11-08 Thread Dan Cross
If it doesn't involve giving me money, they are wrong.

Email me off-list so I can tell you where to mail the checks.

Thank you for your support.

- Dan C.


On Thu, Nov 8, 2012 at 1:14 PM, Matthew Dillon
dil...@apollo.backplane.com wrote:

 I'm always amused when random people get into discussions about what
 I should do with my life :-)

 -Matt



Re: /bin/ls vs .dotted files

2012-09-15 Thread Dan Cross
On Sat, Sep 15, 2012 at 8:29 PM, Matthew Dillon
dil...@apollo.backplane.com wrote:
 I'm less interested in what people thought was correct 30 years
 ago, or even 10 years ago, and more interested in what makes the
 most sense today.  The reality is that if someone is just doing a
 basic 'ls' ungarnished with options they probably aren't interested in
 dot files.  It's a convenience that wasn't imagined 30 years ago
 because one didn't have ten thousand applications installed 30 years
 ago.

 It looks like older versions of linux had the root/-A behavior, but
 newer versions do not. At least for gnu ls.  In fact, considering how
 much 'ls' has forked over the years, I don't think a historical view
 is particularly helpful any more.

 I'm leaning towards making root and non-root behavior the same for ls,
 meaning not turning -A on for root by default.  Insofar as I can tell,
 that is where the larger community has been heading over the years.
 Even in FreeBSD where -A is still turned on for root, there were clearly
 enough people who wanted to turn the blasted thing off that they added
 a -I option.

If you are going to make a change, I suggest adding a '-I' and making
the default -A for *all* users.  Perhaps if people saw how much the
applications they install are littering their directory namespaces,
pressure would build to come up with a more sensible convention to
handle configuration.  Having an arbitrary class of files that are not
displayed by default is non-intuitive and just weird.

- Dan C.


Re: /bin/ls vs .dotted files

2012-09-15 Thread Dan Cross
On Sun, Sep 16, 2012 at 1:14 AM, Peter Avalos pe...@theshell.com wrote:
 On Sat, Sep 15, 2012 at 08:39:48PM +0530, Dan Cross wrote:
 If you are going to make a change, I suggest adding a '-I' and making
 the default -A for *all* users.  Perhaps if people saw how much the
 applications they install are littering their directory namespaces,
 pressure would build to come up with a more sensible convention to
 handle configuration.  Having an arbitrary class of files that are not
 displayed by default is non-intuitive and just weird.

 The standard says, "Filenames beginning with a period ( '.' ) and any
 associated information shall not be written out unless explicitly
 referenced, the -A or -a option is supplied, or an
 implementation-defined condition causes them to be written."  We're not
 going to violate this by turning on -A for everyone.

Fair enough, but a pedantic nit: an implementation-defined condition
could be a declaration that that's how your version of ls works.
That's what other systems have done.

 As far as making root act the same way as everyone else, I'm fine with
 that.  If we do that, I recommend removing the -I option that was just
 added.

This all seems rather like making a mountain out of a molehill.  Is there
really any pressing need to change anything?

- Dan C.