Re: Working around Private.exports

2021-03-08 Thread Vivek Verma via Filesystem-dev
Hi Jorgen,

This won't help you right now, but can you file a bug asking for these to be
made available in the public exports file?
Please email the bug ID as well.

-Vivek

> On Mar 7, 2021, at 5:11 PM, Jorgen Lundman via Filesystem-dev 
>  wrote:
> 
> 
> Hello lists,
> 
> Is this mailing list still active, are there better places to ask?
> 
> So we are trying to remove the list of Private.exports functions we use, 
> since that no longer works with Big Sur.
> 
> For example:
> 
> vnode_iocount()
> 
> Used to avoid deadlock: we call async if the iocount is 1, otherwise we call 
> it directly.  Frustratingly, the hfs source calls it without any magic, but I 
> assume kext loading checks for com.apple and allows Private.exports functions. 
> Not sure I can think of an alternate method, but I really don't want to do 
> ((unsigned long *)vnode)[44] - that is so hacky.
> 
> VFS_ROOT()
> Get the root of a given mount_t. VFS_ROOT itself is gone, and 
> vfs_vnodecovered() is in private. But I think I can call 
> vfs_stat(mount)->mounted_on -> vnode_lookup(). Feels a bit inefficient, but 
> it should work. We don't call it too often.
> 
> cpuid_info()
> Replace with inline asm.
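The inline-asm replacement suggested here can be sketched roughly as below. This is a hedged userland illustration, not the kernel's private cpuid_info(); the helper name cpu_vendor is invented, and on non-x86 targets (e.g. the ARM port mentioned later) there is no CPUID instruction, so it reports failure.

```c
#include <stdint.h>
#include <string.h>

/* Query the CPU vendor string directly with the CPUID instruction
   (leaf 0).  Stand-in for one use of the private cpuid_info() KPI. */
static int cpu_vendor(char out[13]) {
#if defined(__x86_64__) || defined(__i386__)
    uint32_t eax, ebx, ecx, edx;
    __asm__ volatile("cpuid"
                     : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                     : "a"(0), "c"(0));
    /* Vendor string is EBX, EDX, ECX concatenated, e.g. "GenuineIntel". */
    memcpy(out + 0, &ebx, 4);
    memcpy(out + 4, &edx, 4);
    memcpy(out + 8, &ecx, 4);
    out[12] = '\0';
    return 0;
#else
    (void)out;
    return -1;   /* no CPUID on this architecture (e.g. ARM) */
#endif
}
```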
> 
> build_path()
> Copy the build_path() code into our own source. The code duplication isn't too bad.
> 
> kauth_cred_getgroups()
> Not seeing a way around this one. Run without additional groups?
> 
> vfs_context_kernel()
> Not seeing a way around this one.
> 
> vnode_lookupat()
> Given a dvp, find a file. Rewrite to build the path from dvp to the root, 
> creating the full path, then call vnode_lookup(). Should be possible.
> 
> bsd_hostname()
> Not seeing a way around this one. Ask userland for a hostname? (Just used as 
> identifier in multi-host failover reporting.)
> 
> 
> Can anyone think of methods around missing functions, or some other clever 
> things? The goal is to run on Big Sur, but also ARM.
> 
> Sincerely,
> 
> Lund
> 
> 
> 
> 
> 
> ___
> Do not post admin requests to the list. They will be ignored.
> Filesystem-dev mailing list  (Filesystem-dev@lists.apple.com)
> Help/Unsubscribe/Update your Subscription:
> https://lists.apple.com/mailman/options/filesystem-dev/vivek_verma%40apple.com
> 
> This email sent to vivek_ve...@apple.com



Re: APFS cloning not working across volumes inside same container?

2019-05-06 Thread Vivek Verma


> On May 6, 2019, at 6:21 AM, Thomas Tempelmann  wrote:
> 
> I was just looking into making a Deduplication tool for macOS / APFS.
> 
> One common scenario would be to deduplicate macOS systems on separate 
> volumes. Like, for us developers who have several macOS versions on their 
> computers for software testing purposes.
> 
> I thought I could save a lot of space by having files inside /System and 
> /Library share the same space by relying on APFS's cloning feature.
> 
> However, when I tested copying a file from one volume to another volume, both 
> being in the same APFS container (partition), Finder would still copy the 
> data instead of doing the clone thing.
> 
> Now I wonder if that's just a shortcoming of the Finder or a problem with the 
> macOS API.
> 
> After all, since APFS shares a single catalog between all volumes of its 
> container, cloning should be possible across volumes, shouldn't it?


The thing that most closely resembles the HFS catalog isn't shared amongst 
volumes. clonefile(2) has the same restriction as link(2) and rename(2): it has 
to be done within the same volume/mounted filesystem.
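The same-volume restriction means a deduplication or copying tool wants a fallback path. A hedged sketch (the helper name clone_or_copy is invented; on non-Darwin platforms there is no clonefile(2), so it always copies, and the byte-copy fallback does not preserve metadata):

```c
#include <stdio.h>
#include <errno.h>
#if defined(__APPLE__)
#include <sys/clonefile.h>
#endif

/* Try a cheap copy-on-write clone first; when the kernel refuses --
   EXDEV across volumes (the case in this thread) or ENOTSUP on
   non-APFS -- fall back to a plain byte copy. */
static int clone_or_copy(const char *src, const char *dst) {
#if defined(__APPLE__)
    if (clonefile(src, dst, 0) == 0)
        return 0;                      /* CoW clone, no data copied */
    if (errno != EXDEV && errno != ENOTSUP)
        return -1;                     /* some other failure */
#endif
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "wb");
    if (!in || !out) {
        if (in) fclose(in);
        if (out) fclose(out);
        return -1;
    }
    char buf[65536];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        if (fwrite(buf, 1, n, out) != n) {
            fclose(in);
            fclose(out);
            return -1;
        }
    }
    int bad = ferror(in);
    fclose(in);
    if (fclose(out) != 0 || bad)
        return -1;
    return 0;
}
```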


> -- 
> Thomas Tempelmann, http://apps.tempel.org/
> Follow me on Twitter: https://twitter.com/tempelorg
> Read my programming blog: http://blog.tempel.org/



Re: Durably writing resource forks and other extended attributes

2018-07-25 Thread Vivek Verma


> On Jul 11, 2018, at 4:20 AM, Sujay Jayakar  wrote:
> 
> Hi all,
> 
> I'm trying to write file's contents, resource fork, and extended attributes 
> durably on both HFS+ and APFS.  My current approach is to use `write` and 
> `fsetxattr` to the write to the file and then `fcntl(fd, F_FULLFSYNC)` 
> afterwards.  Is this sufficient?
> 

yes.

> Looking at the HFS+ source, I was able to find that...
> Regular files have their content synced on `F_FULLSYNC`, as expected [1].
> All files' resource forks are synced immediately on `fsetxattr` [2].
> It looks like we skip syncing data on directory nodes [3].  If an xattr on a 
> directory is large enough to be in an extent, does this skip waiting on its 
> blocks to be flushed?
> What about extended attributes, both inline and extent-based? 

They are synchronous writes. Resource forks are separately synced in fsetxattr 
because they behave slightly differently from all other extended attributes.

> 
> Next, what guarantees do we have for resource forks and extended attributes 
> with `F_FULLSYNC` on APFS?

same as HFS.
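The write-then-F_FULLFSYNC pattern confirmed in this thread looks roughly like the following in userland. A hedged sketch: durable_write is an invented helper; note that fsetxattr(2) on macOS takes an extra position argument, and on non-Darwin systems the sketch falls back to plain fsync(2), which lacks F_FULLFSYNC's flush-the-drive-cache guarantee, and skips the xattr (Linux fsetxattr has a different signature).

```c
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#if defined(__APPLE__)
#include <sys/xattr.h>
#endif

/* Durably persist a file's data fork plus one xattr, per the thread:
   write(2), fsetxattr(2), then a single F_FULLFSYNC at the end. */
static int durable_write(int fd, const void *buf, size_t len,
                         const char *xname, const void *xval, size_t xlen) {
    if (write(fd, buf, len) != (ssize_t)len)
        return -1;
#if defined(__APPLE__)
    /* macOS fsetxattr has a position arg (used only for resource forks). */
    if (fsetxattr(fd, xname, xval, xlen, 0, 0) != 0)
        return -1;
    return fcntl(fd, F_FULLFSYNC);   /* force data to stable storage */
#else
    (void)xname; (void)xval; (void)xlen;
    return fsync(fd);                /* weaker portable stand-in */
#endif
}
```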

> 
> Thanks,
> Sujay
> 
> [1] 
> https://github.com/apple/darwin-xnu/blob/xnu-2782.1.97/bsd/hfs/hfs_vnops.c#L3011
>  
> 
> [2] 
> https://github.com/apple/darwin-xnu/blob/xnu-2782.1.97/bsd/hfs/hfs_xattr.c#L926
>  
> 
> [3] 
> https://github.com/apple/darwin-xnu/blob/xnu-2782.1.97/bsd/hfs/hfs_vnops.c#L2896
>  
> 
> 



Re: How to use mandatory file locking on afpfs volumes

2018-04-03 Thread Vivek Verma


> On Apr 3, 2018, at 10:53 AM, James Bucanek  wrote:
> 
> Kevin,
> 
> I tested this in a variety of configurations. The tests consisted of setting 
> up an AFP shared volume and then testing open(2) using O_SHLOCK and O_EXLOCK 
> options when opening a file on that volume from two different client 
> processes.
> 
> AFP server = Airport Extreme, Client #1 on machine A, Client #2 on machine B
> AFP server = Airport Extreme, Client #1 on machine A, Client #2 on machine A
> AFP server = machine C with macOS AFP Sharing, Client #1 on machine A, Client 
> #2 on machine B
> AFP server = machine C with macOS AFP Sharing, Client #1 on machine A, Client 
> #2 on machine A
> AFP server = machine C with macOS AFP Sharing, Client #1 on machine A, Client 
> #2 on machine C (same computer running the AFP server)
> 
> In all cases, the open(2) O_SHLOCK and O_EXLOCK worked exactly as they would 
> with a volume that reports it supports POSIX file locking 
> (VOL_CAP_INT_FLOCK=YES). That is, if one client obtains a shared lock no 
> other clients can obtain an exclusive lock, and if one client obtains an 
> exclusive lock no other clients can obtain any lock. Most importantly, the 
> open() call does NOT return an EOPNOTSUPP error, which is what the man page 
> says it will do if "the underlying filesystem does not support locking".
> 
> This is consistent in every configuration, with one notable exception: if you 
> open a file on a remote APF volume with O_SHLOCK or O_EXLOCK from a client 
> machine and you then attempt to perform a regular open() (without either 
> O_SHLOCK or O_EXLOCK) from a second process, the open will fail with a 
> "resource unavailable" error, just as if you had requested a file lock. This 
> is consistent with AFP-style mandatory locking, but inconsistent with 
> POSIX-style advisory locking.

Advisory locking as defined by POSIX is a bunch of sentences thrown together 
which make no sense when you actually try to use them in a real-world application.

AFP's locking protocol 
(https://developer.apple.com/library/content/documentation/Networking/Conceptual/AFP/UsingForkCommands/UsingForkCommands.html#//apple_ref/doc/uid/TP4854-CH225-SW1)
borrows some API elements (on macOS) from the advisory locking interfaces, so 
some calls may on the surface appear to work the same, but it does _not_ 
completely support POSIX-style advisory locking, and that's why the macOS AFP 
client does not claim to support it. It has its own locking implementation 
whose interface may overlap a bit with the interfaces for advisory locking.
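James's O_SHLOCK/O_EXLOCK experiments can be reproduced with a small helper. A hedged sketch (open_exclusive is an invented name): O_EXLOCK and O_SHLOCK are BSD/macOS extensions that acquire a flock(2)-family lock atomically at open(2), and adding O_NONBLOCK makes the lock attempt fail immediately instead of blocking. On platforms without them, open() followed by flock(LOCK_NB) approximates the behavior.

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/file.h>

/* Open a file and take an exclusive lock, failing immediately (returns
   -1) if another open file description already holds a conflicting lock. */
static int open_exclusive(const char *path) {
#if defined(O_EXLOCK)
    /* BSD/macOS: lock is taken atomically as part of open(2). */
    return open(path, O_RDWR | O_CREAT | O_EXLOCK | O_NONBLOCK, 0644);
#else
    /* Portable approximation: open, then flock non-blockingly. */
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd >= 0 && flock(fd, LOCK_EX | LOCK_NB) != 0) {
        close(fd);
        return -1;      /* someone else holds a lock */
    }
    return fd;
#endif
}
```

Note this only demonstrates local flock-family semantics; as the thread explains, over AFP these calls map onto the server's own (mandatory-style) locking rather than true POSIX advisory locks.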


> 
> James
> 
>> Kevin Elliott  March 28, 2018 at 5:15 PM
>> 
>> 
>> Were you testing this on two different machines simultaneously? The point of 
>> AFP locking is to protect the file contents from simultaneous access on 
>> different machines, not within the same machine.
>> 
>> -Kevin
>> 
>> 
>> James Bucanek  March 13, 2018 at 12:18 PM
>> Follow up on my own post...
>> 
>> My intention is to perform file locking on remote volumes, including AFPFS 
>> volumes. So far, I've found no documentation on how to implement "AFP-style" 
>> mandatory locking using fcntl().
>> 
>> Then I remember that, while deprecated, the now-ancient FS API is still 
>> around. Plan B was born: use FSOpenFork's deny-read & deny-write permissions 
>> to implement file locking, then turn around and use open(2) to access the 
>> data.
>> 
>> That didn't work. As soon as I opened the file via FSOpenFork, open(2) would 
>> fail with "resource not available" error.
>> 
>> Hmm, that sounds exactly like what happens when open(2) tries to open a file 
>> that has been opened by another process with O_SHLOCK or O_EXLOCK.
>> 
>> So it would appear that, rather than being completely different file locking 
>> mechanisms, FSOpenFork() bridges to the advisory locking provided by 
>> open(2). And when I rewrite my test code to use open(2) O_SHLOCK and 
>> O_EXLOCK to access the same AFPFS volume from two different clients, it 
>> works perfectly!
>> 
>> Now I'm wondering why the AFPFS volume (this on a Time Capsule, by the way) 
>> reports VOL_CAP_INT_FLOCK=NO and VOL_CAP_INT_ADVLOCK=NO. The functionality 
>> of open(2) on that volume would appear to directly contradict that 
>> capability test.
>> 
>> Can anyone explain this?
>> 
> 


Re: How to use mandatory file locking on afpfs volumes

2018-03-28 Thread Vivek Verma


> On Feb 26, 2018, at 12:11 PM, James Bucanek  wrote:
> 
> More annoying questions...
> 
> How does one use "AFP-style mandatory file locking" in macOS (10.9...current)?
> 
> I'm trying to implement file locking/coordination across various filesystems 
> and servers.
> 
> The man page for getattrlist() includes the following filesystem capability 
> test:
> 
> VOL_CAP_INT_MANLOCK  If this bit is set, the volume format implementation
> supports AFP-style mandatory byte range locks
> via ioctl(2).
> 
> AFPFS (a.k.a. AppleShare) volumes, particularly those on AirPort Extreme / 
> Time Capsule devices, return 1 for this capability. These volumes do not 
> support flock() or fcntl(F_SETLK) style advisory locking.
> 
> I looked at the man page for ioctl(2) and it contains no real information. It 
> says the commands for ioctl(2) are documented in .
> 
> So I had a look at  and found very little there; it's basically 
> an umbrella header for several other headers. So I went through them one at a 
> time: , ,  (the one I would have 
> assumed it would be in), and . I could find no mention of any 
> kind of file or byte range locking commands defined.
> 
> So, is it possible to lock files/ranges on an AFPFS volume using ioctl(2), 
> and if so where are these commands defined?

> 

You're going to be in deprecated & unsupported territory, but you'll find your 
answers in

http://www.sqlite.org/src/raw/src/os_unix.c?name=10e0c4dcdbec8d4189890fdf3e71b32efae194e3
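For the archive, the relevant mechanism in that SQLite file is an AFP-private fsctl. The sketch below is adapted from that os_unix.c source and sits firmly in the deprecated/unsupported territory mentioned above: the ByteRangeLockPB2 layout and the 'z'/23 selector come from SQLite's code, not from any public SDK header, and may not match other OS versions. The wrapper name afp_range_lock is invented; on non-Darwin platforms it simply reports ENOTSUP.

```c
#include <errno.h>
#include <string.h>
#if defined(__APPLE__)
#include <sys/ioctl.h>
#include <sys/fsctl.h>
#endif

/* Byte-range lock parameter block, copied from SQLite's os_unix.c
   (AFP-private interface; not declared in any public header). */
typedef struct ByteRangeLockPB2 {
    unsigned long long offset;        /* offset of first byte to lock */
    unsigned long long length;        /* number of bytes to lock */
    unsigned long long retRangeStart; /* offset of the range granted */
    unsigned char unLockFlag;         /* 1 = unlock, 0 = lock */
    unsigned char startEndFlag;       /* 1 = offset relative to EOF */
    int fd;                           /* fd of the open file */
} ByteRangeLockPB2;

#if defined(__APPLE__)
#define afpfsByteRangeLock2FSCTL _IOWR('z', 23, struct ByteRangeLockPB2)
#endif

/* Lock (or unlock) a byte range on an AFP volume via fsctl(2) on the
   file's path.  Returns 0 on success, -1 with errno on failure. */
static int afp_range_lock(const char *path, int fd,
                          unsigned long long off, unsigned long long len,
                          int unlock) {
#if defined(__APPLE__)
    ByteRangeLockPB2 pb;
    memset(&pb, 0, sizeof pb);
    pb.offset = off;
    pb.length = len;
    pb.unLockFlag = unlock ? 1 : 0;
    pb.fd = fd;
    return fsctl(path, afpfsByteRangeLock2FSCTL, &pb, 0);
#else
    (void)path; (void)fd; (void)off; (void)len; (void)unlock;
    errno = ENOTSUP;   /* AFP fsctl locking is Darwin-only */
    return -1;
#endif
}
```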

> 
> 



Re: So what does VOL_CAP_INT_USERACCESS actually mean?

2018-02-23 Thread Vivek Verma

> On Feb 23, 2018, at 3:56 PM, James Bucanek  wrote:
> 
> Hello,
> 
> I have a (low priority) question about the VOL_CAP_INT_USERACCESS volume 
> attribute.
> 
> I often get the ATTR_CMN_USERACCESS value for files via getattrlist().

Surely then you would want to question VOL_CAP_INT_ATTRLIST :-)  ?

> Today, while doing unrelated research, I stumbled across the description for 
> VOL_CAP_INT_USERACCESS which says "If this bit is set the volume format 
> implementation supports the ATTR_CMN_USERACCESS attribute."
> 
> This took me quite by surprise, since ATTR_CMN_USERACCESS has never failed me. 
> In testing I've now identified volumes that report VOL_CAP_INT_USERACCESS=0, 
> but when I get the attributes for a file and request ATTR_CMN_USERACCESS it 
> reports completely reasonable values.
> 
> So what does it mean, in terms of requesting the ATTR_CMN_USERACCESS 
> attribute, if a volume reports VOL_CAP_INT_USERACCESS=0?
> 

getattrlist(2) used to be part of the filesystem implementation, so it was 
possible to not get ATTR_CMN_USERACCESS back from a particular implementation. 
getattrlist(2) today is implemented in VFS, so this bit has become somewhat 
moot for getattrlist(2) itself, but it can be useful for 
getdirentriesattr(2) (which is deprecated in favor of getattrlistbulk(2)). 

(As for VOL_CAP_INT_ATTRLIST, we always OR it in, so you'll never see it as 0.)

This sounds like something we should fix in our documentation so please file a 
bug.
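Checking these bits from userland looks roughly like the following. A hedged sketch: vol_cap_int_supported is an invented helper. It distinguishes the `valid` bitmap (does the filesystem report this bit at all?) from the `capabilities` bitmap (is the capability on?), which is exactly the distinction the VOL_CAP_INT_USERACCESS question turns on. getattrlist(2) is Darwin-only, so elsewhere the sketch reports ENOTSUP.

```c
#include <errno.h>
#include <string.h>
#include <stdint.h>
#if defined(__APPLE__)
#include <sys/types.h>
#include <sys/attr.h>
#include <unistd.h>
#endif

/* Query one VOL_CAP_INT_* bit for the volume containing `path`.
   Returns 1 = supported, 0 = reported but off, -1 = not reported/error. */
static int vol_cap_int_supported(const char *path, uint32_t bit) {
#if defined(__APPLE__)
    struct attrlist al;
    struct {
        u_int32_t               length;   /* size of returned attrs */
        vol_capabilities_attr_t caps;
    } __attribute__((aligned(4), packed)) buf;

    memset(&al, 0, sizeof al);
    al.bitmapcount = ATTR_BIT_MAP_COUNT;
    al.volattr     = ATTR_VOL_INFO | ATTR_VOL_CAPABILITIES;
    if (getattrlist(path, &al, &buf, sizeof buf, 0) != 0)
        return -1;
    if (!(buf.caps.valid[VOL_CAPABILITIES_INTERFACES] & bit))
        return -1;   /* filesystem doesn't report this bit at all */
    return (buf.caps.capabilities[VOL_CAPABILITIES_INTERFACES] & bit) ? 1 : 0;
#else
    (void)path; (void)bit;
    errno = ENOTSUP;   /* getattrlist(2) is Darwin-only */
    return -1;
#endif
}
```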


> Thanks,
> 
> James



Re: deadlock in 10.12, maybe?

2016-12-07 Thread Vivek Verma
0x
>vu_fifoinfo = 0x
>vu_ubcinfo = 0x
>  }
>  v_cleanblkhd = {
>lh_first = 0x
>  }
>  v_dirtyblkhd = {
>lh_first = 0x
>  }
>  v_knotes = {
>slh_first = 0x
>  }
>  v_cred = 0xff802f74e7f0
>  v_authorized_actions = 0x0880
>  v_cred_timestamp = 0x
>  v_nc_generation = 0x0006
>  v_numoutput = 0x
>  v_writecount = 0x
>  v_name = 0xff802f2236b0 "3C836C29-F9D8-48EB-801E-817468FE3B07"
>  v_parent = 0xff8033b6bf80
>  v_lockf = 0x
>  v_op = 0xff8036b22000
>  v_mount = 0xff80370e5040
>  v_data = 0xff90cd702d88
>  v_label = 0x
>  v_resolve = 0x0000
> }
> 
> Although, vflush() has
> 
>if (((vp->v_usecount == 0) ||
>((vp->v_usecount - vp->v_kusecount) == 0))) {
> [snip]
>vnode_reclaim_internal(vp, 1, 1, 0);
> 
> So that is confusing. But I will bow out here.
> 
> 
> But if the PanicDump is still wanted, say the word and I'll go file a
> radar, or just make it available for download.

A radar with the coredump attached would be the most helpful.

-thanks !
Vivek


> 
> 
> Lund
> 
> 
> Vivek Verma wrote:
>> Can you file a radar for this (with preferably the kernel core dump 
>> attached)? I would like to see some vnode state which isn't in just the 
>> stack trace.
>> 
>> 
> 
> -- 
> Jorgen Lundman   | <lund...@lundman.net>
> Unix Administrator   | +81 (0)90-5578-8500
> Shibuya-ku, Tokyo| Japan
> 

