Hello community,

here is the log from the commit of package python-joblib for openSUSE:Factory 
checked in at 2024-05-11 18:18:48
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-joblib (Old)
 and      /work/SRC/openSUSE:Factory/.python-joblib.new.1880 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-joblib"

Sat May 11 18:18:48 2024 rev:26 rq:1172881 version:1.4.2

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-joblib/python-joblib.changes      2024-04-23 18:55:08.784460184 +0200
+++ /work/SRC/openSUSE:Factory/.python-joblib.new.1880/python-joblib.changes    2024-05-11 18:18:58.469356212 +0200
@@ -1,0 +2,11 @@
+Thu May  9 08:36:55 UTC 2024 - Dirk Müller <dmuel...@suse.com>
+
+- update to 1.4.2:
+  * Due to maintenance issues, 1.4.1 was not valid and we bumped
+    the version to 1.4.2
+  * Fix a backward incompatible change in MemorizedFunc.call
+    which needs to return the metadata. Also make sure that
+    NotMemorizedFunc.call return an empty dict for metadata for
+    consistency. https://github.com/joblib/joblib/pull/1576
+
+-------------------------------------------------------------------
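The changelog entry above describes the 1.4.2 fix: ``MemorizedFunc.call`` must return the call's metadata alongside the output, and ``NotMemorizedFunc.call`` returns an empty dict for consistency. As a rough illustration of that return contract only (a toy stand-in in plain Python, not joblib's actual classes or cache machinery):

```python
class ToyMemorizedFunc:
    """Toy stand-in for joblib's MemorizedFunc (illustrative only)."""

    def __init__(self, func):
        self.func = func
        self._cache = {}

    def call(self, *args):
        # Force execution, bypassing the cache; return output *and* metadata.
        output = self.func(*args)
        metadata = {"args": args}  # joblib records call info (duration, inputs) here
        self._cache[args] = output
        return output, metadata

    def __call__(self, *args):
        # Normal cached call: returns only the output, never the metadata.
        if args not in self._cache:
            self._cache[args] = self.func(*args)
        return self._cache[args]


class ToyNotMemorizedFunc:
    """Toy stand-in for NotMemorizedFunc: no caching, empty metadata dict."""

    def __init__(self, func):
        self.func = func

    def call(self, *args):
        return self.func(*args), {}  # empty dict keeps the API shape consistent

    def __call__(self, *args):
        return self.func(*args)
```

Callers of ``call`` therefore unpack a 2-tuple in 1.4.2, while ``__call__`` (the usual cached entry point) is unchanged.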

Old:
----
  joblib-1.4.0.tar.gz

New:
----
  joblib-1.4.2.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-joblib.spec ++++++
--- /var/tmp/diff_new_pack.ZvztIy/_old  2024-05-11 18:18:59.137380538 +0200
+++ /var/tmp/diff_new_pack.ZvztIy/_new  2024-05-11 18:18:59.137380538 +0200
@@ -18,7 +18,7 @@
 
 %{?sle15_python_module_pythons}
 Name:           python-joblib
-Version:        1.4.0
+Version:        1.4.2
 Release:        0
 Summary:        Module for using Python functions as pipeline jobs
 License:        BSD-3-Clause

++++++ joblib-1.4.0.tar.gz -> joblib-1.4.2.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/CHANGES.rst new/joblib-1.4.2/CHANGES.rst
--- old/joblib-1.4.0/CHANGES.rst        2024-04-08 17:05:17.000000000 +0200
+++ new/joblib-1.4.2/CHANGES.rst        2024-05-02 14:11:23.000000000 +0200
@@ -1,17 +1,29 @@
 Latest changes
 ==============
 
+Release 1.4.2 -- 2024/05/02
+---------------------------
+
+Due to maintenance issues, 1.4.1 was not valid and we bumped the version to 1.4.2
+
+
+- Fix a backward incompatible change in ``MemorizedFunc.call`` which needs to
+  return the metadata. Also make sure that ``NotMemorizedFunc.call`` return
+  an empty dict for metadata for consistency.
+  https://github.com/joblib/joblib/pull/1576
+
+
 Release 1.4.0 -- 2024/04/08
 ---------------------------
 
 - Allow caching co-routines with `Memory.cache`.
   https://github.com/joblib/joblib/pull/894
-  
+
 - Try to cast ``n_jobs`` to int in parallel and raise an error if
   it fails. This means that ``n_jobs=2.3`` will now result in
   ``effective_n_jobs=2`` instead of failing.
   https://github.com/joblib/joblib/pull/1539
-  
+
 - Ensure that errors in the task generator given to Parallel's call
   are raised in the results consumming thread.
   https://github.com/joblib/joblib/pull/1491
@@ -28,7 +40,7 @@
 - dask backend now supports ``return_as=generator`` and
   ``return_as=generator_unordered``.
   https://github.com/joblib/joblib/pull/1520
-  
+
 - Vendor cloudpickle 3.0.0 and end support for Python 3.7 which has
   reached end of life.
   https://github.com/joblib/joblib/pull/1487
@@ -78,7 +90,7 @@
 - Drop runtime dependency on ``distutils``. ``distutils`` is going away
   in Python 3.12 and is deprecated from Python 3.10 onwards. This import
   was kept around to avoid breaking scikit-learn, however it's now been
-  long enough since scikit-learn deployed a fixed (verion 1.1 was released
+  long enough since scikit-learn deployed a fixed (version 1.1 was released
   in May 2022) that it should be safe to remove this.
   https://github.com/joblib/joblib/pull/1361
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/PKG-INFO new/joblib-1.4.2/PKG-INFO
--- old/joblib-1.4.0/PKG-INFO   2024-04-08 17:06:26.276729300 +0200
+++ new/joblib-1.4.2/PKG-INFO   2024-05-02 14:12:05.793951000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: joblib
-Version: 1.4.0
+Version: 1.4.2
 Summary: Lightweight pipelining with Python functions
 Author-email: Gael Varoquaux <gael.varoqu...@normalesup.org>
 License: BSD 3-Clause
@@ -33,7 +33,7 @@
    :target: https://badge.fury.io/py/joblib
    :alt: Joblib version
 
-.. |Azure| image:: https://dev.azure.com/joblib/joblib/_apis/build/status/joblib.joblib?branchName=master
+.. |Azure| image:: https://dev.azure.com/joblib/joblib/_apis/build/status/joblib.joblib?branchName=main
   :target: https://dev.azure.com/joblib/joblib/_build?definitionId=3&_a=summary&branchFilter=40
    :alt: Azure CI status
 
@@ -41,7 +41,7 @@
     :target: https://joblib.readthedocs.io/en/latest/?badge=latest
     :alt: Documentation Status
 
-.. |Codecov| image:: https://codecov.io/gh/joblib/joblib/branch/master/graph/badge.svg
+.. |Codecov| image:: https://codecov.io/gh/joblib/joblib/branch/main/graph/badge.svg
    :target: https://codecov.io/gh/joblib/joblib
    :alt: Codecov coverage
 
@@ -58,7 +58,7 @@
     git clone https://github.com/joblib/joblib.git
 
 If you don't have git installed, you can download a zip
-of the latest code: https://github.com/joblib/joblib/archive/refs/heads/master.zip
+of the latest code: https://github.com/joblib/joblib/archive/refs/heads/main.zip
 
 Installing
 ==========
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/joblib-1.4.0/README.rst new/joblib-1.4.2/README.rst
--- old/joblib-1.4.0/README.rst 2024-04-08 14:26:43.000000000 +0200
+++ new/joblib-1.4.2/README.rst 2024-04-09 09:52:44.000000000 +0200
@@ -4,7 +4,7 @@
    :target: https://badge.fury.io/py/joblib
    :alt: Joblib version
 
-.. |Azure| image:: https://dev.azure.com/joblib/joblib/_apis/build/status/joblib.joblib?branchName=master
+.. |Azure| image:: https://dev.azure.com/joblib/joblib/_apis/build/status/joblib.joblib?branchName=main
   :target: https://dev.azure.com/joblib/joblib/_build?definitionId=3&_a=summary&branchFilter=40
    :alt: Azure CI status
 
@@ -12,7 +12,7 @@
     :target: https://joblib.readthedocs.io/en/latest/?badge=latest
     :alt: Documentation Status
 
-.. |Codecov| image:: https://codecov.io/gh/joblib/joblib/branch/master/graph/badge.svg
+.. |Codecov| image:: https://codecov.io/gh/joblib/joblib/branch/main/graph/badge.svg
    :target: https://codecov.io/gh/joblib/joblib
    :alt: Codecov coverage
 
@@ -29,7 +29,7 @@
     git clone https://github.com/joblib/joblib.git
 
 If you don't have git installed, you can download a zip
-of the latest code: https://github.com/joblib/joblib/archive/refs/heads/master.zip
+of the latest code: https://github.com/joblib/joblib/archive/refs/heads/main.zip
 
 Installing
 ==========
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/azure-pipelines.yml new/joblib-1.4.2/azure-pipelines.yml
--- old/joblib-1.4.0/azure-pipelines.yml        2024-04-08 14:26:43.000000000 +0200
+++ new/joblib-1.4.2/azure-pipelines.yml        2024-04-09 09:51:51.000000000 +0200
@@ -9,10 +9,10 @@
   displayName: Daily build
   branches:
     include:
-    - master
+    - main
 
 trigger:
-- master
+- main
 
 jobs:
 - job: linting
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/conftest.py new/joblib-1.4.2/conftest.py
--- old/joblib-1.4.0/conftest.py        2024-04-08 14:26:43.000000000 +0200
+++ new/joblib-1.4.2/conftest.py        2024-05-02 10:05:25.000000000 +0200
@@ -8,6 +8,7 @@
 
 from joblib.parallel import mp
 from joblib.backports import LooseVersion
+from joblib import Memory
 try:
     import lz4
 except ImportError:
@@ -84,3 +85,11 @@
     # Note that we also use a shorter timeout for the per-test callback
     # configured via the pytest-timeout extension.
     faulthandler.dump_traceback_later(60, exit=True)
+
+
+@pytest.fixture(scope='function')
+def memory(tmp_path):
+    "Fixture to get an independent and self-cleaning Memory"
+    mem = Memory(location=tmp_path, verbose=0)
+    yield mem
+    mem.clear()
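The ``memory`` fixture added to conftest.py above uses pytest's yield-fixture pattern: setup before ``yield``, teardown after, so each test gets an independent, self-cleaning cache location. The same setup/teardown shape can be sketched without pytest using a context manager (an analogy with illustrative names, not joblib or pytest code):

```python
import contextlib
import shutil
import tempfile

@contextlib.contextmanager
def self_cleaning_dir():
    # Setup: create an independent temporary location (the "memory" location).
    path = tempfile.mkdtemp()
    try:
        yield path           # the test body runs here
    finally:
        shutil.rmtree(path)  # teardown runs even if the test body raises

# A test body would use the directory as its private cache location:
with self_cleaning_dir() as location:
    print(location)  # a fresh directory, removed again on exit
```

Moving the fixture into conftest.py makes it available to every test module, which is why the local copy in test_memory.py is removed further down.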
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/doc/parallel.rst new/joblib-1.4.2/doc/parallel.rst
--- old/joblib-1.4.0/doc/parallel.rst   2024-04-08 14:29:20.000000000 +0200
+++ new/joblib-1.4.2/doc/parallel.rst   2024-05-02 10:05:25.000000000 +0200
@@ -47,7 +47,7 @@
 
 Future releases are planned to also support returning a generator that yields
 the results in the order of completion rather than the order of submission, by
-using ``return_as="unordered_generator"`` instead of ``return_as="generator"``.
+using ``return_as="generator_unordered"`` instead of ``return_as="generator"``.
 In this case the order of the outputs will depend on the concurrency of workers
 and will not be guaranteed to be deterministic, meaning the results can be
 yielded with a different order every time the code is executed.
@@ -260,7 +260,7 @@
 ``ParallelBackendBase``. Please refer to the `default backends source code`_ as
 a reference if you want to implement your own custom backend.
 
-.. _`default backends source code`: https://github.com/joblib/joblib/blob/master/joblib/_parallel_backends.py
+.. _`default backends source code`: https://github.com/joblib/joblib/blob/main/joblib/_parallel_backends.py
 
 Note that it is possible to register a backend class that has some mandatory
 constructor parameters such as the network address and connection credentials
@@ -418,4 +418,4 @@
 
 .. autoclass:: joblib.parallel.ParallelBackendBase
 
-.. autoclass:: joblib.parallel.AutoBatchingMixin
\ No newline at end of file
+.. autoclass:: joblib.parallel.AutoBatchingMixin
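The doc fix above corrects the option name to ``return_as="generator_unordered"``, which yields results in completion order rather than submission order. The distinction can be sketched with the standard library's ``concurrent.futures`` (an analogy only, not joblib's implementation):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(square, i) for i in range(5)]

    # Submission order (analogous to return_as="generator"):
    ordered = [f.result() for f in futures]

    # Completion order (analogous to return_as="generator_unordered"):
    unordered = [f.result() for f in as_completed(futures)]

print(ordered)            # [0, 1, 4, 9, 16]
print(sorted(unordered))  # same values; arrival order is not deterministic
```

As the corrected doc text notes, the completion-order variant trades determinism for lower latency: results are handed back as soon as any worker finishes.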
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/joblib/__init__.py new/joblib-1.4.2/joblib/__init__.py
--- old/joblib-1.4.0/joblib/__init__.py 2024-04-08 17:05:17.000000000 +0200
+++ new/joblib-1.4.2/joblib/__init__.py 2024-05-02 14:11:23.000000000 +0200
@@ -106,7 +106,7 @@
 # Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
 # 'X.Y.dev0' is the canonical version of 'X.Y.dev'
 #
-__version__ = '1.4.0'
+__version__ = '1.4.2'
 
 
 import os
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/joblib/externals/loky/process_executor.py new/joblib-1.4.2/joblib/externals/loky/process_executor.py
--- old/joblib-1.4.0/joblib/externals/loky/process_executor.py  2024-04-08 14:26:43.000000000 +0200
+++ new/joblib-1.4.2/joblib/externals/loky/process_executor.py  2024-04-09 09:53:52.000000000 +0200
@@ -494,7 +494,7 @@
                     # The GC managed to free the memory: everything is fine.
                     continue
 
-                # The process is leaking memory: let the master process
+                # The process is leaking memory: let the main process
                 # know that we need to start a new worker.
                 mp.util.info("Memory leak detected: shutting down worker")
                 result_queue.put(pid)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/joblib/memory.py new/joblib-1.4.2/joblib/memory.py
--- old/joblib-1.4.0/joblib/memory.py   2024-04-08 14:29:25.000000000 +0200
+++ new/joblib-1.4.2/joblib/memory.py   2024-05-02 14:11:23.000000000 +0200
@@ -322,7 +322,7 @@
         pass
 
     def call(self, *args, **kwargs):
-        return self.func(*args, **kwargs)
+        return self.func(*args, **kwargs), {}
 
     def check_call_in_cache(self, *args, **kwargs):
         return False
@@ -476,8 +476,9 @@
 
         Returns
         -------
-        Output of the wrapped function if shelving is false, or a
-        MemorizedResult reference to the value if shelving is true.
+        output: Output of the wrapped function if shelving is false, or a
+            MemorizedResult reference to the value if shelving is true.
+        metadata: dict containing the metadata associated with the call.
         """
         args_id = self._get_args_id(*args, **kwargs)
         call_id = (self.func_id, args_id)
@@ -506,7 +507,7 @@
         # the cache.
         if self._is_in_cache_and_valid(call_id):
             if shelving:
-                return self._get_memorized_result(call_id)
+                return self._get_memorized_result(call_id), {}
 
             try:
                 start_time = time.time()
@@ -514,7 +515,7 @@
                 if self._verbose > 4:
                     self._print_duration(time.time() - start_time,
                                          context='cache loaded ')
-                return output
+                return output, {}
             except Exception:
                 # XXX: Should use an exception logger
                 _, signature = format_signature(self.func, *args, **kwargs)
@@ -527,6 +528,7 @@
                 f"in location {location}"
             )
 
+        # Returns the output but not the metadata
         return self._call(call_id, args, kwargs, shelving)
 
     @property
@@ -567,10 +569,12 @@
             class "NotMemorizedResult" is used when there is no cache
             activated (e.g. location=None in Memory).
         """
-        return self._cached_call(args, kwargs, shelving=True)
+        # Return the wrapped output, without the metadata
+        return self._cached_call(args, kwargs, shelving=True)[0]
 
     def __call__(self, *args, **kwargs):
-        return self._cached_call(args, kwargs, shelving=False)
+        # Return the output, without the metadata
+        return self._cached_call(args, kwargs, shelving=False)[0]
 
     def __getstate__(self):
         # Make sure self.func's source is introspected prior to being pickled -
@@ -752,11 +756,16 @@
         -------
         output : object
             The output of the function call.
+        metadata : dict
+            The metadata associated with the call.
         """
         call_id = (self.func_id, self._get_args_id(*args, **kwargs))
+
+        # Return the output and the metadata
         return self._call(call_id, args, kwargs)
 
     def _call(self, call_id, args, kwargs, shelving=False):
+        # Return the output and the metadata
         self._before_call(args, kwargs)
         start_time = time.time()
         output = self.func(*args, **kwargs)
@@ -774,13 +783,13 @@
             self._print_duration(duration)
         metadata = self._persist_input(duration, call_id, args, kwargs)
         if shelving:
-            return self._get_memorized_result(call_id, metadata)
+            return self._get_memorized_result(call_id, metadata), metadata
 
         if self.mmap_mode is not None:
             # Memmap the output at the first call to be consistent with
             # later calls
             output = self._load_item(call_id, metadata)
-        return output
+        return output, metadata
 
     def _persist_input(self, duration, call_id, args, kwargs,
                        this_duration_limit=0.5):
@@ -861,12 +870,14 @@
 ###############################################################################
 class AsyncMemorizedFunc(MemorizedFunc):
     async def __call__(self, *args, **kwargs):
-        out = super().__call__(*args, **kwargs)
-        return await out if asyncio.iscoroutine(out) else out
+        out = self._cached_call(args, kwargs, shelving=False)
+        out = await out if asyncio.iscoroutine(out) else out
+        return out[0]  # Don't return metadata
 
     async def call_and_shelve(self, *args, **kwargs):
-        out = super().call_and_shelve(*args, **kwargs)
-        return await out if asyncio.iscoroutine(out) else out
+        out = self._cached_call(args, kwargs, shelving=True)
+        out = await out if asyncio.iscoroutine(out) else out
+        return out[0]  # Don't return metadata
 
     async def call(self, *args, **kwargs):
         out = super().call(*args, **kwargs)
@@ -876,8 +887,9 @@
         self._before_call(args, kwargs)
         start_time = time.time()
         output = await self.func(*args, **kwargs)
-        return self._after_call(call_id, args, kwargs, shelving,
-                                output, start_time)
+        return self._after_call(
+            call_id, args, kwargs, shelving, output, start_time
+        )
 
 
 ###############################################################################
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/joblib/parallel.py new/joblib-1.4.2/joblib/parallel.py
--- old/joblib-1.4.0/joblib/parallel.py 2024-04-08 14:29:25.000000000 +0200
+++ new/joblib-1.4.2/joblib/parallel.py 2024-04-15 17:26:47.000000000 +0200
@@ -114,7 +114,7 @@
     parallel_(config/backend) context manager.
     """
     if param is not default_parallel_config[key]:
-        # param is explicitely set, return it
+        # param is explicitly set, return it
         return param
 
     if context_config[key] is not default_parallel_config[key]:
@@ -186,8 +186,8 @@
     uses_threads = getattr(backend, 'uses_threads', False)
     supports_sharedmem = getattr(backend, 'supports_sharedmem', False)
     # Force to use thread-based backend if the provided backend does not
-    # match the shared memory constraint or if the backend is not explicitely
-    # given and threads are prefered.
+    # match the shared memory constraint or if the backend is not explicitly
+    # given and threads are preferred.
     force_threads = (require == 'sharedmem' and not supports_sharedmem)
     force_threads |= (
         not explicit_backend and prefer == 'threads' and not uses_threads
@@ -199,7 +199,7 @@
             nesting_level=nesting_level
         )
         # Warn the user if we forced the backend to thread-based, while the
-        # user explicitely specified a non-thread-based backend.
+        # user explicitly specified a non-thread-based backend.
         if verbose >= 10 and explicit_backend:
             print(
                 f"Using {sharedmem_backend.__class__.__name__} as "
@@ -1473,7 +1473,7 @@
                     # a thread internal to the backend, register a task with
                     # an error that will be raised in the user's thread.
                     if isinstance(e.__context__, queue.Empty):
-                        # Supress the cause of the exception if it is
+                        # Suppress the cause of the exception if it is
                         # queue.Empty to avoid cluttered traceback. Only do it
                         # if the __context__ is really empty to avoid messing
                         # with causes of the original error.
@@ -1717,10 +1717,10 @@
                 yield result
 
     def _wait_retrieval(self):
-        """Return True if we need to continue retriving some tasks."""
+        """Return True if we need to continue retrieving some tasks."""
 
         # If the input load is still being iterated over, it means that tasks
-        # are still on the dispatch wait list and their results will need to
+        # are still on the dispatch waitlist and their results will need to
         # be retrieved later on.
         if self._iterating:
             return True
@@ -1782,7 +1782,7 @@
             error_job = next((job for job in self._jobs
                               if job.status == TASK_ERROR), None)
 
-        # If this error job exists, immediatly raise the error by
+        # If this error job exists, immediately raise the error by
         # calling get_result. This job might not exists if abort has been
         # called directly or if the generator is gc'ed.
         if error_job is not None:
@@ -1912,7 +1912,7 @@
 
         if n_jobs == 1:
             # If n_jobs==1, run the computation sequentially and return
-            # immediatly to avoid overheads.
+            # immediately to avoid overheads.
             output = self._get_sequential_output(iterable)
             next(output)
             return output if self.return_generator else list(output)
@@ -1934,7 +1934,7 @@
             # BatchCalls, that makes the loky executor use a temporary folder
             # specific to this Parallel object when pickling temporary memmaps.
             # This callback is necessary to ensure that several Parallel
-            # objects using the same resuable executor don't use the same
+            # objects using the same reusable executor don't use the same
             # temporary resources.
 
             def _batched_calls_reducer_callback():
@@ -2000,7 +2000,7 @@
 
         # The first item from the output is blank, but it makes the interpreter
         # progress until it enters the Try/Except block of the generator and
-        # reach the first `yield` statement. This starts the aynchronous
+        # reaches the first `yield` statement. This starts the asynchronous
         # dispatch of the tasks to the workers.
         next(output)
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/joblib/pool.py new/joblib-1.4.2/joblib/pool.py
--- old/joblib-1.4.0/joblib/pool.py     2024-04-08 14:26:43.000000000 +0200
+++ new/joblib-1.4.2/joblib/pool.py     2024-04-09 09:53:33.000000000 +0200
@@ -265,11 +265,11 @@
         Memmapping mode for numpy arrays passed to workers.
         See 'max_nbytes' parameter documentation for more details.
     forward_reducers: dictionary, optional
-        Reducers used to pickle objects passed from master to worker
+        Reducers used to pickle objects passed from main process to worker
         processes: see below.
     backward_reducers: dictionary, optional
         Reducers used to pickle return values from workers back to the
-        master process.
+        main process.
     verbose: int, optional
         Make it possible to monitor how the communication of numpy arrays
         with the subprocess is handled (pickling or memmapping)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/joblib/test/test_memory.py new/joblib-1.4.2/joblib/test/test_memory.py
--- old/joblib-1.4.0/joblib/test/test_memory.py 2024-04-08 14:29:25.000000000 +0200
+++ new/joblib-1.4.2/joblib/test/test_memory.py 2024-05-02 14:11:23.000000000 +0200
@@ -1412,12 +1412,6 @@
 class TestCacheValidationCallback:
     "Tests on parameter `cache_validation_callback`"
 
-    @pytest.fixture()
-    def memory(self, tmp_path):
-        mem = Memory(location=tmp_path)
-        yield mem
-        mem.clear()
-
     def foo(self, x, d, delay=None):
         d["run"] = True
         if delay is not None:
@@ -1491,3 +1485,42 @@
         assert d1["run"]
         assert not d2["run"]
         assert d3["run"]
+
+
+class TestMemorizedFunc:
+    "Tests for the MemorizedFunc and NotMemorizedFunc classes"
+
+    @staticmethod
+    def f(x, counter):
+        counter[x] = counter.get(x, 0) + 1
+        return counter[x]
+
+    def test_call_method_memorized(self, memory):
+        "Test calling the function"
+
+        f = memory.cache(self.f, ignore=['counter'])
+
+        counter = {}
+        assert f(2, counter) == 1
+        assert f(2, counter) == 1
+
+        x, meta = f.call(2, counter)
+        assert x == 2, "f has not been called properly"
+        assert isinstance(meta, dict), (
+            "Metadata are not returned by MemorizedFunc.call."
+        )
+
+    def test_call_method_not_memorized(self, memory):
+        "Test calling the function"
+
+        f = NotMemorizedFunc(self.f)
+
+        counter = {}
+        assert f(2, counter) == 1
+        assert f(2, counter) == 2
+
+        x, meta = f.call(2, counter)
+        assert x == 3, "f has not been called properly"
+        assert isinstance(meta, dict), (
+            "Metadata are not returned by MemorizedFunc.call."
+        )
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/joblib/test/test_memory_async.py new/joblib-1.4.2/joblib/test/test_memory_async.py
--- old/joblib-1.4.0/joblib/test/test_memory_async.py   2024-04-08 14:29:25.000000000 +0200
+++ new/joblib-1.4.2/joblib/test/test_memory_async.py   2024-05-02 10:05:25.000000000 +0200
@@ -147,3 +147,24 @@
         with raises(KeyError):
             result.get()
         result.clear()  # Do nothing if there is no cache.
+
+
+@pytest.mark.asyncio
+async def test_memorized_func_call_async(memory):
+
+    async def ff(x, counter):
+        await asyncio.sleep(0.1)
+        counter[x] = counter.get(x, 0) + 1
+        return counter[x]
+
+    gg = memory.cache(ff, ignore=['counter'])
+
+    counter = {}
+    assert await gg(2, counter) == 1
+    assert await gg(2, counter) == 1
+
+    x, meta = await gg.call(2, counter)
+    assert x == 2, "f has not been called properly"
+    assert isinstance(meta, dict), (
+        "Metadata are not returned by MemorizedFunc.call."
+    )
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/joblib/test/test_parallel.py new/joblib-1.4.2/joblib/test/test_parallel.py
--- old/joblib-1.4.0/joblib/test/test_parallel.py       2024-04-08 14:29:20.000000000 +0200
+++ new/joblib-1.4.2/joblib/test/test_parallel.py       2024-04-09 12:38:57.000000000 +0200
@@ -362,6 +362,28 @@
             UnpicklableObject()) for _ in range(10))
 
 
+@with_numpy
+@with_multiprocessing
+@parametrize('byteorder', ['<', '>', '='])
+def test_parallel_byteorder_corruption(byteorder):
+
+    def inspect_byteorder(x):
+        return x, x.dtype.byteorder
+
+    x = np.arange(6).reshape((2, 3)).view(f'{byteorder}i4')
+
+    initial_np_byteorder = x.dtype.byteorder
+
+    result = Parallel(n_jobs=2, backend='loky')(
+        delayed(inspect_byteorder)(x) for _ in range(3)
+    )
+
+    for x_returned, byteorder_in_worker in result:
+        assert byteorder_in_worker == initial_np_byteorder
+        assert byteorder_in_worker == x_returned.dtype.byteorder
+        np.testing.assert_array_equal(x, x_returned)
+
+
 @parametrize('backend', PARALLEL_BACKENDS)
 def test_parallel_timeout_success(backend):
     # Check that timeout isn't thrown when function is fast enough
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/joblib-1.4.0/joblib.egg-info/PKG-INFO new/joblib-1.4.2/joblib.egg-info/PKG-INFO
--- old/joblib-1.4.0/joblib.egg-info/PKG-INFO   2024-04-08 17:06:25.000000000 +0200
+++ new/joblib-1.4.2/joblib.egg-info/PKG-INFO   2024-05-02 14:12:05.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: joblib
-Version: 1.4.0
+Version: 1.4.2
 Summary: Lightweight pipelining with Python functions
 Author-email: Gael Varoquaux <gael.varoqu...@normalesup.org>
 License: BSD 3-Clause
@@ -33,7 +33,7 @@
    :target: https://badge.fury.io/py/joblib
    :alt: Joblib version
 
-.. |Azure| image:: https://dev.azure.com/joblib/joblib/_apis/build/status/joblib.joblib?branchName=master
+.. |Azure| image:: https://dev.azure.com/joblib/joblib/_apis/build/status/joblib.joblib?branchName=main
   :target: https://dev.azure.com/joblib/joblib/_build?definitionId=3&_a=summary&branchFilter=40
    :alt: Azure CI status
 
@@ -41,7 +41,7 @@
     :target: https://joblib.readthedocs.io/en/latest/?badge=latest
     :alt: Documentation Status
 
-.. |Codecov| image:: https://codecov.io/gh/joblib/joblib/branch/master/graph/badge.svg
+.. |Codecov| image:: https://codecov.io/gh/joblib/joblib/branch/main/graph/badge.svg
    :target: https://codecov.io/gh/joblib/joblib
    :alt: Codecov coverage
 
@@ -58,7 +58,7 @@
     git clone https://github.com/joblib/joblib.git
 
 If you don't have git installed, you can download a zip
-of the latest code: https://github.com/joblib/joblib/archive/refs/heads/master.zip
+of the latest code: https://github.com/joblib/joblib/archive/refs/heads/main.zip
 
 Installing
 ==========
