Your message dated Mon, 9 Oct 2023 12:38:18 +0300
with message-id <5d801ff9-9c4a-f508-26af-dd8e0ecf4...@debian.org>
and subject line Re: pytorch-sparse: FTBFS on arm64
has caused the Debian Bug report #1053302,
regarding pytorch-sparse: FTBFS on arm64
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)


-- 
1053302: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1053302
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---
Source: pytorch-sparse
Version: 0.6.17-1
Severity: serious
Tags: ftbfs sid trixie
Justification: fails to build from source (but built successfully in the past)
X-Debbugs-Cc: sramac...@debian.org

https://buildd.debian.org/status/fetch.php?pkg=pytorch-sparse&arch=arm64&ver=0.6.17-1&stamp=1694243524&raw=0


=================================== FAILURES ===================================
________________________ test_spmm[dtype5-device5-sum] _________________________

dtype = torch.float32, device = device(type='cpu'), reduce = 'sum'

    @pytest.mark.parametrize('dtype,device,reduce',
                             product(grad_dtypes, devices, reductions))
    def test_spmm(dtype, device, reduce):
        if device == torch.device('cuda:0') and dtype == torch.bfloat16:
            return  # Not yet implemented.
    
        src = torch.randn((10, 8), dtype=dtype, device=device)
        src[2:4, :] = 0  # Remove multiple rows.
        src[:, 2:4] = 0  # Remove multiple columns.
        src = SparseTensor.from_dense(src).requires_grad_()
        row, col, value = src.coo()
    
        other = torch.randn((2, 8, 2), dtype=dtype, device=device,
                            requires_grad=True)
    
        src_col = other.index_select(-2, col) * value.unsqueeze(-1)
        expected = torch_scatter.scatter(src_col, row, dim=-2, reduce=reduce)
        if reduce == 'min':
            expected[expected > 1000] = 0
        if reduce == 'max':
            expected[expected < -1000] = 0
    
        grad_out = torch.randn_like(expected)
    
        expected.backward(grad_out)
        expected_grad_value = value.grad
        value.grad = None
        expected_grad_other = other.grad
        other.grad = None
    
        out = matmul(src, other, reduce)
        out.backward(grad_out)
    
        atol = 1e-7
        if dtype == torch.float16 or dtype == torch.bfloat16:
            atol = 1e-1
    
        assert torch.allclose(expected, out, atol=atol)
        assert torch.allclose(expected_grad_value, value.grad, atol=atol)
>       assert torch.allclose(expected_grad_other, other.grad, atol=atol)
E       assert False
E        +  where False = <built-in method allclose of type object at 0xffff8bbd1d90>(tensor([[[-1.2813e+00, -1.5149e+00],\n         [-5.9411e-02, -3.7580e-01],\n         [ 0.0000e+00,  0.0000e+00],\n       ...e+00],\n         [ 1.9554e+00,  2.9660e+00],\n         [-2.2483e+00, -2.2663e+00],\n         [ 4.1025e-03, -9.0971e-01]]]), tensor([[[-1.2813e+00, -1.5149e+00],\n         [-5.9411e-02, -3.7580e-01],\n         [ 0.0000e+00,  0.0000e+00],\n       ...e+00],\n         [ 1.9554e+00,  2.9660e+00],\n         [-2.2483e+00, -2.2663e+00],\n         [ 4.1023e-03, -9.0971e-01]]]), atol=1e-07)
E        +    where <built-in method allclose of type object at 0xffff8bbd1d90> = torch.allclose
E        +    and   tensor([[[-1.2813e+00, -1.5149e+00],\n         [-5.9411e-02, -3.7580e-01],\n         [ 0.0000e+00,  0.0000e+00],\n       ...e+00],\n         [ 1.9554e+00,  2.9660e+00],\n         [-2.2483e+00, -2.2663e+00],\n         [ 4.1023e-03, -9.0971e-01]]]) = tensor([[[ 4.5925e-01, -1.0188e+00],\n         [ 1.6147e-03,  1.0610e+00],\n         [-1.3520e+00, -9.0994e-01],\n       ...0452e-01,  7.3244e-01],\n         [-1.1243e+00,  6.4083e-01],\n         [ 6.1791e-01,  2.0024e-01]]], requires_grad=True).grad

test/test_matmul.py:51: AssertionError
=============================== warnings summary ===============================
.pybuild/cpython3_3.11_torch-sparse/build/test/test_matmul.py::test_spspmm[dtype1-device1]
  /<<PKGBUILDDIR>>/.pybuild/cpython3_3.11_torch-sparse/build/torch_sparse/matmul.py:97: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at ./aten/src/ATen/SparseCsrTensorImpl.cpp:54.)
    C = torch.sparse.mm(A, B)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
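
As an aside, the beta-state warning comes from PyTorch's sparse CSR machinery (SparseCsrTensorImpl.cpp), which matmul.py:97 reaches through torch.sparse.mm. A minimal standalone sketch that should reproduce the same warning on a recent PyTorch -- the shapes below are illustrative, not taken from the package:

    import torch

    # Illustrative shapes; any CSR construction/matmul should do.
    A = torch.randn(8, 8)
    A[A.abs() < 1.0] = 0.0         # sparsify the matrix
    A_csr = A.to_sparse_csr()      # CSR support is in beta -> UserWarning
    B = torch.randn(8, 4)
    C = torch.sparse.mm(A_csr, B)  # mirrors matmul.py:97, C = torch.sparse.mm(A, B)
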
=========================== short test summary info ============================
FAILED test/test_matmul.py::test_spmm[dtype5-device5-sum] - assert False
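
The mismatch itself is tiny: the offending element of other.grad differs only in the last digits printed (4.1025e-03 vs 4.1023e-03), i.e. by about 2e-7, just over the atol=1e-7 the test applies to float32 gradients. A minimal sketch of the comparison, using those two values from the log:

    import torch

    # The two mismatching elements from the assertion above.
    expected = torch.tensor(4.1025e-03)
    actual = torch.tensor(4.1023e-03)

    # allclose checks |a - b| <= atol + rtol * |b| (default rtol=1e-5),
    # so the threshold is about 1e-7 + 4.1e-8 ~= 1.4e-7, below the 2e-7 gap.
    print(torch.allclose(expected, actual, atol=1e-7))  # False
    # A slightly looser absolute tolerance would accept the arm64 result.
    print(torch.allclose(expected, actual, atol=1e-6))  # True

So this looks like accumulated floating-point rounding on arm64 rather than an outright wrong result.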

Cheers
-- 
Sebastian Ramacher

--- End Message ---
--- Begin Message ---
Version: 0.6.18-1

Hi,

On Sun, 1 Oct 2023 10:58:06 +0200 Sebastian Ramacher <sramac...@debian.org> wrote:
> Source: pytorch-sparse
> Version: 0.6.17-1
> Severity: serious
> Tags: ftbfs sid trixie
> Justification: fails to build from source (but built successfully in the past)
> X-Debbugs-Cc: sramac...@debian.org
> 
> https://buildd.debian.org/status/fetch.php?pkg=pytorch-sparse&arch=arm64&ver=0.6.17-1&stamp=1694243524&raw=0

I have uploaded 0.6.18-1 today and it built successfully on buildd.

Best,
Andrius

--- End Message ---
