On Mon, Jun 2, 2025 at 9:30 AM Leon Deligny via NumPy-Discussion < numpy-discussion@python.org> wrote:
> ### Proposed new feature or change:
>
> Motivations: This is specific to 3D vector algebra. In fluid dynamics, we
> have access to the moment of a force about a specific point
> (M_P = OP \cross F). This calculation is crucial when determining the
> center of pressure (CoP), a pivotal concept for understanding the
> distribution of forces on an object submerged in a fluid. To accurately
> pinpoint the CoP, we often need to "reverse" this process, effectively
> requiring an inverse functionality for the cross product.
>
> How to compute the right-inverse: Given two real vectors a and c, we want
> to find b such that c = a \cross b. If we write this equation in matrix
> form, then we need to define:
>
> ```latex
> A = \begin{pmatrix}
> 0 & -a_3 & a_2 \\
> a_3 & 0 & -a_1 \\
> -a_2 & a_1 & 0
> \end{pmatrix}
> ```
>
> to get $c = A \cdot b$ (where $\cdot$ is matrix multiplication). In
> real-world scenarios there does not always exist a vector b such that
> $c = A \cdot b$. But we can always find a vector b such that
> $|c - A \cdot b|_2^2$ is minimal (i.e., b is the best approximation
> $c \approx A \cdot b$).
>
> When minimizing $|c - A \cdot b|_2^2$, there exists an analytical solution
> found with the Gaussian elimination procedure (we are working with 3x3
> matrices and 3-dimensional vectors).
>
> I did not find any literature on this subject specifically, but this
> would be a greatly appreciated feature.
> Cf. https://github.com/numpy/numpy/issues/29110

To be clear, I still don't think we're entertaining a feature request for
inclusion in numpy, but we on the mailing list can perhaps help you
implement this in your own code.

Note that because `A` is singular, there are an infinite number of `b`
values that will meet the criterion. The problem is not reducing the
residual (you can always get it to 0, or near enough in floating-point
terms); the problem is picking which of the infinitely many valid solutions
you want.
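The singularity is easy to check directly: `a` itself spans the null space of `A`, since `a \cross a = 0`. A minimal sketch (not from the original message; the values chosen here are illustrative):

```python
import numpy as np

# The matrix A representing the linear map b -> a x b is always singular
# for any nonzero a, because a lies in its null space (a x a = 0).
a = np.array([1.0, 2.0, 3.0])
A = np.cross(a, -np.eye(3))  # same construction as in the transcript

print(np.linalg.matrix_rank(A))  # 2, not 3 -> singular
print(A @ a)                     # a is in the null space: all zeros
```

So for any particular solution `b`, the whole line `b + t*a` solves the same equation, which is why a tie-breaking rule (such as minimum norm) is needed.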
`np.linalg.lstsq()` will give you the minimum-norm solution, which is
usually a pretty good option, though it may or may not suit your specific
application.

>>> import numpy as np
>>> rng = np.random.default_rng(0x7cadf78fc1ad960cdb91bb4745b84b28)
>>> av = rng.multivariate_normal(np.zeros(3), np.eye(3), size=10)
>>> bv = rng.multivariate_normal(np.zeros(3), np.eye(3), size=10)
>>> cv = np.cross(av, bv)
>>> cv
array([[ 0.78079691, -0.61265949,  0.43715586],
       [ 0.0508124 ,  2.3512873 ,  0.17093971],
       [ 0.39231732,  0.35281331,  0.90361588],
       [-0.36610523,  0.24177299,  0.49421533],
       [-0.51625536,  1.69312288, -0.9220552 ],
       [-0.59298882,  0.60810492,  2.38083992],
       [ 0.17308949, -0.59307124, -0.7205352 ],
       [-0.17391941, -0.43438387, -0.451717  ],
       [ 1.15385801, -0.80682863, -1.4479563 ],
       [-1.77839153, -2.30249409,  0.05814433]])
>>> # Fancy way to compute the A matrix corresponding to the operation
>>> # `np.cross(a, ...)`
>>> A = np.cross(av[0], -np.eye(3))
>>> A @ bv[0]
array([ 0.78079691, -0.61265949,  0.43715586])
>>> cv[0]
array([ 0.78079691, -0.61265949,  0.43715586])
>>> for a, b in zip(av, bv):
...     c = np.cross(a, b)
...     A = np.cross(a, -np.eye(3))
...     bhat = np.linalg.lstsq(A, c)[0]
...     res = c - np.cross(a, bhat)
...     print(res)
...
[-2.22044605e-16  0.00000000e+00 -1.66533454e-16]
[7.56339436e-16 8.88178420e-16 2.77555756e-16]
[-5.55111512e-17 -2.77555756e-16 -2.22044605e-16]
[-1.66533454e-16 -5.55111512e-17  0.00000000e+00]
[0.00000000e+00 6.66133815e-16 3.33066907e-16]
[-1.11022302e-16  0.00000000e+00 -4.44089210e-16]
[ 0.00000000e+00 -1.11022302e-16 -2.22044605e-16]
[-1.11022302e-16  1.11022302e-16 -5.55111512e-17]
[ 0.00000000e+00  0.00000000e+00 -2.22044605e-16]
[2.22044605e-16 4.44089210e-16 0.00000000e+00]

-- 
Robert Kern
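Since the problem is only 3-dimensional, the minimum-norm solution also has a closed form worth noting (a sketch, not from the original thread; it assumes `c` really is an exact cross product, i.e. `c` is perpendicular to `a`): any solution `b` splits into a component along `a`, which the cross product annihilates, and a component perpendicular to `a`; dropping the parallel part gives `b_min = (c x a) / (a . a)`, by the vector triple-product identity.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(3)
b = rng.standard_normal(3)
c = np.cross(a, b)  # c is perpendicular to a by construction

# Closed-form minimum-norm right-inverse of the cross product:
# a x (c x a) / (a.a) = (c (a.a) - a (a.c)) / (a.a) = c, since a.c = 0.
b_min = np.cross(c, a) / np.dot(a, a)

# It reproduces c...
print(np.allclose(np.cross(a, b_min), c))
# ...and agrees with lstsq's minimum-norm answer.
A = np.cross(a, -np.eye(3))
bhat = np.linalg.lstsq(A, c, rcond=None)[0]
print(np.allclose(b_min, bhat))
```

If `c` is noisy (not exactly perpendicular to `a`), `lstsq` still handles it gracefully, effectively projecting out the unreachable component, so it remains the safer general-purpose choice.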
_______________________________________________
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3//lists/numpy-discussion.python.org
Member address: arch...@mail-archive.com