On Tue, 7 Mar 2023 at 16:53, Stephen Tucker wrote:
>
> Hi again,
>
> I tried xrange, but I got an error telling me that my integer was too big
> for a C long.
>
> Clearly, xrange in Py2 is not capable of dealing with Python (that is,
> possibly very long) integers.
That's because Py2 has two different integer types: int, which is a C long
under the hood, and long, which is arbitrary-precision. xrange only accepts
the former.
Jon Ribbens via Python-list <python-list@python.org> wrote:
> On 2023-03-02, Stephen Tucker wrote:
> > The range function in Python 2.7 (and yes, I know that it is now
> > superseded) provokes a Memory Error when asked to deliver a very long
> > list of values.
> >
On 2023-03-02, Stephen Tucker wrote:
> The range function in Python 2.7 (and yes, I know that it is now
> superseded) provokes a Memory Error when asked to deliver a very long
> list of values.
>
> I assume that this is because the function produces a list which it then
> iterates through.
On Thu, 2 Mar 2023 at 22:27, Stephen Tucker wrote:
>
> Hi,
>
> The range function in Python 2.7 (and yes, I know that it is now
> superseded) provokes a Memory Error when asked to deliver a very long
> list of values.
>
> I assume that this is because the function produces a list which it then
> iterates through.
On 2023-03-02 at 11:25:49,
Stephen Tucker wrote:
> The range function in Python 2.7 (and yes, I know that it is now
> superseded) provokes a Memory Error when asked to deliver a very long
> list of values.
>
> I assume that this is because the function produces a list which it then iterates through.
Hi,
The range function in Python 2.7 (and yes, I know that it is now
superseded) provokes a Memory Error when asked to deliver a very long
list of values.
I assume that this is because the function produces a list which it then
iterates through.
1. Does the range function in Python 3.x behave in the same way?
Barry
> Cc: MRAB ; Python-list@python.org
> Subject: Re: Why I fail so bad to check for memory leak with this code?
>
> On Fri, 22 Jul 2022 at 09:00, Barry wrote:
> > With code as complex as python’s there will be memory allocations that
> > occur that will not be directly related to the python code you test.
On Fri, 22 Jul 2022 at 09:00, Barry wrote:
> With code as complex as python’s there will be memory allocations that
> occur that will not be directly related to the python code you test.
>
> To put it another way there is noise in your memory allocation signal.
>
> Usually the sig
pickle.dumps(iter([]))
>
> It's too strange. I found a bunch of true memory leaks with this
> decorator. It seems to be reliable. It's correct with pickle and with
> iter, but not when pickling iters.
With code as complex as python’s there will be memory allocations that occur
that will not be directly related to the python code you test.
average=28 B
It seems that, after 10 million loops, only 63 have a leak, with only
~3 KB. It seems to me that we can't call it a leak, no? Probably
pickle needs a lot more cycles to be sure there's actually a real leakage.
If it was a leak, then the amount of memory used or the co
I've done this other simple test:
#!/usr/bin/env python3
import tracemalloc
import gc
import pickle
tracemalloc.start()
snapshot1 = tracemalloc.take_snapshot().filter_traces(
    (tracemalloc.Filter(True, __file__), )
)
for i in range(1000):
    pickle.dumps(iter([]))
gc.collect()
snapshot2 = tracemalloc.take_snapshot().filter_traces(
    (tracemalloc.Filter(True, __file__), )
)
This naive code shows no leak:
import resource
import pickle
c = 0
while True:
    pickle.dumps(iter([]))
    if c % 100_000 == 0:
        # report the high-water mark periodically
        max_rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print(f"iteration: {c}, max rss: {max_rss} kb")
    c += 1
On Thu, 21 Jul 2022 at 22:28, MRAB wrote:
>
> It's something to do with pickling iterators because it still occurs
> when I reduce func_76 to:
>
> @trace
> def func_76():
>     pickle.dumps(iter([]))
It's too strange. I found a bunch of true memory leaks with this decorator.
On 21/07/2022 20:47, Marco Sulla wrote:
I tried to check for memory leaks in a bunch of functions of mine using a
simple decorator. It works, but it fails with this code, returning a random
count_diff at every run. Why?
import tracemalloc
import gc
import functools
from uuid import uuid4
import pickle
I tried to check for memory leaks in a bunch of functions of mine using a
simple decorator. It works, but it fails with this code, returning a random
count_diff at every run. Why?
import tracemalloc
import gc
import functools
from uuid import uuid4
import pickle
def getUuid():
    return str(uuid4())
shared. For me, a character string
> can use any number of contiguous bytes in memory that some kind of pointer or
> table lookup provides access to.
>
> Clearly you could squeeze a larger number in by not writing decimals but using
> hexadecimal notation that adds a to f as valid entries.
Jen,
I would not be shocked at incompatibilities in the system described making it
hard to exchange anything, including text, but am not clear if there is a
limitation of four bytes in what can be shared. For me, a character string can
use any number of contiguous bytes in memory that some kind of pointer or
table lookup provides access to.
> On 2 Feb 2022, at 18:19, Jen Kris via Python-list wrote:
>
> It's not clear to me from the struct module whether it can actually
> auto-detect endianness.
It is impossible to auto-detect endianness in the general case.
> I think it must be specified, just as I had to do with int.from_bytes().
On Wed, 2 Feb 2022 19:16:19 +0100 (CET), Jen Kris
declaimed the following:
>It's not clear to me from the struct module whether it can actually
>auto-detect endianness. I think it must be specified, just as I had to do
>with int.from_bytes(). In my case endianness was dictated by how the four
>bytes were populated, starting with the zero bytes on the left.
ily seen in the same order,
> or "1.234e3" or whatever?
>
> Obviously, if the mechanism is heavily used and multiple sides keep reading
> and even writing the same memory location, this is not ideal. But having
> different incompatible processors looking at the same memor
It's not clear to me from the struct module whether it can actually auto-detect
endianness. I think it must be specified, just as I had to do with
int.from_bytes(). In my case endianness was dictated by how the four bytes
were populated, starting with the zero bytes on the left.
order, or
"1.234e3" or whatever?
Obviously, if the mechanism is heavily used and multiple sides keep reading and
even writing the same memory location, this is not ideal. But having different
incompatible processors looking at the same memory is also not.
On Wed, 2 Feb 2022 00:40:22 +0100 (CET), Jen Kris
declaimed the following:
>
> breakup = int.from_bytes(byte_val, "big")
>print("this is breakup " + str(breakup))
>
>Python prints: this is breakup 32894
>
>Note that I had to switch from little endian to big endian. Python is little
>endian by default.
let me (and the
> list) know.
By using the struct module you can control the endianness of the data.
Barry
>
> Thanks.
>
>
> Feb 1, 2022, 14:20 by ba...@barrys-emacs.org:
>
> On 1 Feb 2022, at 20:26, Jen Kris via Python-list wrote:
>
> I am using multiprocessing.shared_memory to pass data between NASM and Python.
>> On 1 Feb 2022, at 20:26, Jen Kris via Python-list
>> wrote:
>>
>> I am using multiprocessing.shared_memory to pass data between NASM and
>> Python. The shared memory is created in NASM before Python is called.
>> Python connects to the shm: shm_00 =
>> shared_memory.SharedMemory(name='shm_object_00', create=False).
> On 1 Feb 2022, at 20:26, Jen Kris via Python-list wrote:
>
> I am using multiprocessing.shared_memory to pass data between NASM and
> Python. The shared memory is created in NASM before Python is called.
> Python connects to the shm: shm_00 =
> shared_memory.SharedMemory(name='shm_object_00', create=False).
I am using multiprocessing.shared_memory to pass data between NASM and Python.
The shared memory is created in NASM before Python is called. Python connects
to the shm: shm_00 =
shared_memory.SharedMemory(name='shm_object_00', create=False).
I have used shared memory at other
> I had the (mis)pleasure of dealing with a multi-terabyte postgresql
> instance many years ago and figuring out why random scripts were eating
> up system memory became quite common.
>
> All of our "ETL" scripts were either written in Perl, Java, or Python
> but
On 3/29/21 5:12 AM, Alexey wrote:
Hello everyone!
I'm experiencing problems with memory consumption.
I have a class which is doing an ETL job. What's happening inside:
- fetching existing objects from DB via SQLAlchemy
- iterate over raw data
- create new/update existing objects
- commit changes
Thursday, 1 April 2021 at 15:56:23 UTC+3, Marco Ippolito:
> > > Are you running with systemd?
> >
> > I really don't know.
> An example of how to check:
>
> ```
> $ readlink /sbin/init
> /lib/systemd/systemd
> ```
>
> You want to check which program runs as PID 1.
Thank you Marco
Thursday, 1 April 2021 at 15:46:21 UTC+3, Marco Ippolito:
> I suspect the high watermark of `` needs to be reachable still and,
> secondly, that a forceful constraint whilst running would crash the
> container?
Exactly.
Thursday, 1 April 2021 at 17:21:59 UTC+3, Mats Wichmann:
> On 4/1/21 5:50 AM, Alexey wrote:
> > Found it. As I said before the problem was lurking in the cache.
> > A few days ago I read about circular references and things like that and
> > I thought to myself that it might be the case. To build the cache I was
> > using lots of 'setdefault' methods chained together
problem in practice?
> >>>
> >>> I can't do that because it will affect other containers running on this
> >>> host.
> >>> In my opinion it may significantly reduce their performance.
> >>
> >> Assuming this is a modern Linux then you should have control groups that
> >> allow you to set limits on memory and swap for each container.
; and then you can simply reference
> self.__cache[cluster_name][database_name] to read or update the cache.
I agree
> Having that be more efficient than either self.__cache=None or del
> self.__cache (which will be equivalent), I can understand. But better
> than clearing the dict? S
On 4/1/21 5:50 AM, Alexey wrote:
Found it. As I said before the problem was lurking in the cache.
A few days ago I read about circular references and things like that and
I thought to myself that it might be the case. To build the cache I was
using lots of 'setdefault' methods chained together
affect other containers running on this
>>> host.
>>> In my opinion it may significantly reduce their performance.
>>
>> Assuming this is a modern Linux then you should have control groups that allow
>> you to set limits on memory and swap for each container.
>
> > Are you running with systemd?
>
> I really don't know.
An example of how to check:
```
$ readlink /sbin/init
/lib/systemd/systemd
```
You want to check which program runs as PID 1.
```
ps 1
```
> I'm sorry. I didn't understand your question right. If I have 4 workers,
> >>> they require 4Gb
> >>> in idle state and some extra memory when they execute other tasks. If I
> >>> increase workers
> >>> count up to
> In my opinion it may significantly reduce their performance.
>
> Assuming this is a modern Linux then you should have control groups that allow
> you to set limits on memory and swap for each container.
>
> Are you running with systemd?
If their problem is that their memory goes from `` to `` and then
back
On Thu, Apr 1, 2021 at 10:56 PM Alexey wrote:
>
> Found it. As I said before the problem was lurking in the cache.
> A few days ago I read about circular references and things like that and
> I thought to myself that it might be the case. To build the cache I was
> using lots of 'setdefault' methods chained together
>>> I'm sorry. I didn't understand your question right. If I have 4 workers,
>>> they require 4Gb
>>> in idle state and some extra memory when they execute other tasks. If I
>>> increase workers
>>> count up to 16, they'll eat all the memory I have (16GB) on my machine and
>>> will crash as soon
>>> as system
In my opinion, if you're leaving the function scope you don't have to
worry about what variables you're leaving there. There are other guys at
StackOverflow who have similar problems with big dictionaries and memory
but they're using different approaches.
Thank you everyone for your help.
On 31/03/2021 09:35, Alexey wrote:
Wednesday, 31 March 2021 at 01:20:06 UTC+3, Dan Stromberg:
What if you increase the machine's (operating system's) swap space? Does
that take care of the problem in practice?
I can't do that because it will affect other containers running on this host.
In my
or small objects) stats to stderr.
> >> Try printing stats before and after 1st run, and after 2nd run. And
> >> post it in this thread if you can. (no sensible information in the
> >> stats).
> `glibc` has similar functions to monitor the memory allocation
> at the C level: `mallinfo[2]`, `malloc_stats`, `malloc_info`.
, and after 2nd run. And
>> post it in this thread if you can. (no sensible information in the
>> stats).
`glibc` has similar functions to monitor the memory allocation
at the C level: `mallinfo[2]`, `malloc_stats`, `malloc_info`.
The `mallinfo` functions can be called via `ctypes`.
After second run:
> > # arenas allocated total = 63,635
> > # arenas reclaimed = 63,238
> > # arenas highwater mark = 10,114
> > # arenas allocated current = 397
> > 397 arenas * 262144 bytes/arena = 104,071,168
> OK, memory allocated by obmalloc is 61MB -> 92MB -> 104MB.
> # arenas highwater mark = 10,114
> # arenas allocated current = 397
> 397 arenas * 262144 bytes/arena = 104,071,168
OK, memory allocated by obmalloc is 61MB -> 92MB -> 104MB.
Memory usage is increasing, but it is much smaller than 1GB. 90% of the memory
is allocated by malloc.
Wednesday, 31 March 2021 at 11:52:43 UTC+3, Marco Ippolito:
> > > At which point does the problem start manifesting itself?
> > The problem spot is my cache(dict). I simplified my code to just load
> > all the objects to this dict and then clear it.
> What's the memory utilisation just _before_ performing this load?
Wednesday, 31 March 2021 at 06:54:52 UTC+3, Inada Naoki:
> First of all, I recommend upgrading your Python. Python 3.6 is a bit old.
I was thinking about that.
> As you saying, Python can not return the memory to OS until the whole
> arena become unused.
> If your task releases
Wednesday, 31 March 2021 at 05:45:27 UTC+3, cameron...@gmail.com:
> Since everyone is talking about vague OS memory use and not at all about
> working set size of Python objects, let me ...
> On 29Mar2021 03:12, Alexey wrote:
> >I'm experiencing problems with memory consumption.
> > At which point does the problem start manifesting itself?
> The problem spot is my cache(dict). I simplified my code to just load
> all the objects to this dict and then clear it.
What's the memory utilisation just _before_ performing this load? I am assuming
it's mu
Wednesday, 31 March 2021 at 01:20:06 UTC+3, Dan Stromberg:
> On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
>
> >
> > I'm sorry. I didn't understand your question right. If I have 4 workers,
> > they require 4Gb
> > in idle state and some extra memory when they execute other tasks.
Tuesday, 30 March 2021 at 18:43:54 UTC+3, Alan Gauld:
> On 29/03/2021 11:12, Alexey wrote:
> The first thing you really need to tell us is which
> OS you are using? Memory management varies wildly
> depending on OS. Even different flavours of *nix
> do it differently.
Tuesday, 30 March 2021 at 18:43:51 UTC+3, Marco Ippolito:
> Have you tried to identify where in your code the surprising memory
> allocations
> are made?
Yes.
> You could "bisect search" by adding breakpoints:
>
> https://docs.python.org/3/library/functions.html#breakpoint
On Mon, Mar 29, 2021 at 7:16 PM Alexey wrote:
>
> Problem. Before executing, my interpreter process weighs ~100Mb, after first
> run memory increases up to 500Mb
> and after second run it weighs 1Gb. If I will continue to run this class,
> memory wont increase, so I think
>
Since everyone is talking about vague OS memory use and not at all about
working set size of Python objects, let me ...
On 29Mar2021 03:12, Alexey wrote:
>I'm experiencing problems with memory consumption.
>
>I have a class which is doing an ETL job. What's happening inside:
> -
On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
>
> I'm sorry. I didn't understand your question right. If I have 4 workers,
> they require 4Gb
> in idle state and some extra memory when they execute other tasks. If I
> increase workers
> count up to 16, they'll eat all the memory I have (16GB) on my machine and
On 30/03/2021 16:50, Chris Angelico wrote:
>> A 1GB process on modern computers is hardly a big problem?
>> Most machines have 4G and many have 16G or even 32G
>> nowadays.
>>
>
> Desktop systems maybe, but if you rent yourself a worker box, it might
> not have anything like that much. Especially
On Wed, Mar 31, 2021 at 2:44 AM Alan Gauld via Python-list wrote:
>
> On 29/03/2021 11:12, Alexey wrote:
> > Hello everyone!
> > I'm experiencing problems with memory consumption.
> >
>
> The first thing you really need to tell us is which
> OS you are using?
Have you tried to identify where in your code the surprising memory allocations
are made?
You could "bisect search" by adding breakpoints:
https://docs.python.org/3/library/functions.html#breakpoint
At which point does the problem start manifesting itself?
On 29/03/2021 11:12, Alexey wrote:
> Hello everyone!
> I'm experiencing problems with memory consumption.
>
The first thing you really need to tell us is which
OS you are using? Memory management varies wildly
depending on OS. Even different flavours of *nix
do it differently.
> I'm sorry. I didn't understand your question right. If I have 4 workers,
> they require 4Gb
> in idle state and some extra memory when they execute other tasks. If I
> increase workers
> count up to 16, they'll eat all the memory I have (16GB) on my machine and
> will crash as soon as system
Monday, 29 March 2021 at 19:56:52 UTC+3, Stestagg:
> > > 2. Can you try a test with 16 or 32 active workers (i.e. number of
> > > workers=2x available memory in GB), do they all still end up with 1gb
> > > usage? or do you get any other memory-related issues running this?
> >> The mentioned issue is still open, so not sure if it was corrected.
> >>
> >> https://manhtai.github.io/posts/memory-leak-in-celery/
> >
> >As I mentioned in my first message, I tried to run
> >this task(class) via Flask API calls, without Celery.
> >And results are the same. Flask worker receives the
> > 2. Can you try a test with 16 or 32 active workers (i.e. number of
> > workers=2x available memory in GB), do they all still end up with 1gb
> > usage? or do you get any other memory-related issues running this?
> Yes. They will consume 1Gb each. It doesn't matter
Alexey wrote at 2021-3-29 06:26 -0700:
>Monday, 29 March 2021 at 15:57:43 UTC+3, Julio Oña:
>> It looks like the problem is on celery.
>> The mentioned issue is still open, so not sure if it was corrected.
>>
>> https://manhtai.github.io/posts/memory-leak-in-celery/
Monday, 29 March 2021 at 17:19:02 UTC+3, Stestagg:
> On Mon, Mar 29, 2021 at 2:32 PM Alexey wrote:
> Some questions here to help understand more:
>
> 1. Do you have any actual problems caused by running 8 celery workers
> (beyond high memory reports)? What are they?
On Mon, Mar 29, 2021 at 2:32 PM Alexey wrote:
> Monday, 29 March 2021 at 15:57:43 UTC+3, Julio Oña:
> > It looks like the problem is on celery.
> > The mentioned issue is still open, so not sure if it was corrected.
> >
> > > https://manhtai.github.io/posts/memory-leak-in-celery/
Monday, 29 March 2021 at 15:57:43 UTC+3, Julio Oña:
> It looks like the problem is on celery.
> The mentioned issue is still open, so not sure if it was corrected.
>
> https://manhtai.github.io/posts/memory-leak-in-celery/
As I mentioned in my first message, I tried to run this task(class) via
Flask API calls, without Celery.
It looks like the problem is on celery.
The mentioned issue is still open, so not sure if it was corrected.
https://manhtai.github.io/posts/memory-leak-in-celery/
Julio
On Mon, 29 Mar 2021 at 08:31, Alexey (zen.supag...@gmail.com)
wrote:
> Hello Lars!
> Thanks for your interest.
Hello Lars!
Thanks for your interest.
The problem appears when all celery workers
require 1Gb of RAM each in idle state. They
hold this memory constantly and when they do
something useful, they grab more memory. I
think 8Gb+ in idle state is quite a lot for my
app.
> Did it crash your system
Hello Alexej,
May I stupidly ask, why you care about that in general? Please don't get
me wrong I don't want to criticize you, this is rather meant to be a
(thought) provoking question.
Normally your OS-Kernel and the Python-Interpreter get along pretty well
and when there is free memory
Hello everyone!
I'm experiencing problems with memory consumption.
I have a class which is doing an ETL job. What's happening inside:
- fetching existing objects from DB via SQLAlchemy
- iterate over raw data
- create new/update existing objects
- commit changes
Before processing data I c
Pasha Stetsenko wrote at 2020-10-23 11:32 -0700:
> ...
> static int my_init(PyObject*, PyObject*, PyObject*) { return 0; }
> static void my_dealloc(PyObject*) {}
I think, the `dealloc` function is responsible to actually
free the memory area.
I see for example:
static void
Spec_dealloc(Spec *self) {
    Py_TYPE(self)->tp_free((PyObject *)self);
}
Thanks MRAB, this was it.
I guess I was thinking about tp_dealloc as a C++ destructor, where the base
class' destructor is called automatically.
On Fri, Oct 23, 2020 at 11:59 AM MRAB wrote:
> On 2020-10-23 19:32, Pasha Stetsenko wrote:
> > Thanks for all the replies!
> > Following Chris's advice
On 2020-10-23 19:32, Pasha Stetsenko wrote:
Thanks for all the replies!
Following Chris's advice, I tried to reduce the code to the smallest
reproducible example (I guess I should have done it sooner),
but here's what I came up with:
```
#include <Python.h>
static int my_init(PyObject*,
type()` function creates a custom type and attaches it to a
module. After this, creating 100M instances of the object will cause the
process memory to swell to 1.5G:
```
for i in range(10**8):
    z = dt.mytype()
```
I know this is not normal because if instead i used a builtin type such as
`list`, or
Pasha Stetsenko wrote at 2020-10-22 17:51 -0700:
> ...
>I'm a maintainer of a python library "datatable" (can be installed from
>PyPi), and i've been recently trying to debug a memory leak that occurs in
>my library.
>The program that exposes the leak is quite simple:
> On Oct 22, 2020, at 5:51 PM, Pasha Stetsenko wrote:
>
> Dear Python gurus,
>
> I'm a maintainer of a python library "datatable" (can be installed from
> PyPi), and i've been recently trying to debug a memory leak that occurs in
> my library.
>
On Fri, Oct 23, 2020 at 12:20 PM Pasha Stetsenko wrote:
> I'm currently not sure where to go from here. Is there something wrong with
> my python object that prevents it from being correctly processed by the
> Python runtime? Because this doesn't seem to be the usual case of
> incrementing the reference count.
Dear Python gurus,
I'm a maintainer of a python library "datatable" (can be installed from
PyPi), and i've been recently trying to debug a memory leak that occurs in
my library.
The program that exposes the leak is quite simple:
```
import datatable as dt
import gc  # just in case
On Fri, Oct 23, 2020 at 3:35 AM Grant Edwards wrote:
> Moving from 2.x to 3.x isn't too bad, but trying to maintain
> compatibility with both is painful. At this point, I would probably
> just abandon 2.x.
>
Definitely. No point trying to support both when you're starting with
code from a Py3 example.
On 2020-10-22, Chris Angelico wrote:
> On Fri, Oct 23, 2020 at 12:15 AM Shaozhong SHI wrote:
>> What should I know or watch out if I decide to move from Python 2.7
>> to Python 3?
>
> Key issues? Well, for starters, you don't have to worry about whether
> your strings are Unicode or not. They ju
On Fri, Oct 23, 2020 at 12:15 AM Shaozhong SHI wrote:
>
> Thanks, Chris.
>
> What should I know or watch out if I decide to move from Python 2.7 to Python
> 3?
>
> What are the key issues? Syntax?
>
Keep it on-list please :)
Key issues? Well, for starters, you don't have to worry about whether
Never worked with ZFS, it sounds interesting. Anyway, IMHO it's much
simpler to save to disk, also for debugging: you don't have to extract the
data from the db if you need to inspect it.
On Thu, 22 Oct 2020 at 14:39, D'Arcy Cain wrote:
> On 10/22/20 7:23 AM, Marco Sulla wrote:
> > I would add that usually I do not recommend saving files on databases.
On 10/22/20 7:23 AM, Marco Sulla wrote:
I would add that usually I do not recommend saving files on databases. I
usually save the file on the disk and the path and mime on a dedicated
table.
I used to do that because backing up the database became huge. Now I use
ZFS snapshots with send/receive.
On Thu, Oct 22, 2020 at 8:28 PM Shaozhong SHI wrote:
>
> I found this last option very interesting.
>
> Saving the dataframe to memory using StringIO
>
> https://naysan.ca/2020/06/21/pandas-to-postgresql-using-psycopg2-copy_from/
>
> But, testing shows
> unicode argument expected, got 'str'
I would add that usually I do not recommend saving files on databases. I
usually save the file on the disk and the path and mime on a dedicated
table.
Try to save it in a binary field on PG using hdf5:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_hdf.html
On Thu, 22 Oct 2020 at 11:29, Shaozhong SHI wrote:
> I found this last option is very interesting.
>
> Saving the dataframe to memory using StringIO
I found this last option very interesting.
Saving the dataframe to memory using StringIO
https://naysan.ca/2020/06/21/pandas-to-postgresql-using-psycopg2-copy_from/
But, testing shows
unicode argument expected, got 'str'
Any working example for getting a DataFrame into a PostgreSQL table?
conversion, and
> managing that memory, but the original pointer ends up being lost and
> leaking memory.
Yes, the problem with my code I posted is that resource.getrusage reports the
Python allocated memory. Once I've changed to
psutil.Process().memory_info().rss I see the memory grow.
On 6/7/20 2:25 PM, Barry wrote:
>> Does ctypes, when using restype, free allocated memory?
>>
>> For example, will the memory allocated by "strdup" be freed after the "del"
>> statement? If not, how can I free it?
>
> See https://linux.die.net/m
> On 7 Jun 2020, at 14:23, Miki Tebeka wrote:
>
> Hi,
>
> Does ctypes, when using restype, free allocated memory?
>
> For example, will the memory allocated by "strdup" be freed after the "del"
> statement? If not, how can I free it?
See https
On 6/7/20 7:15 AM, Miki Tebeka wrote:
> Hi,
>
> Does ctypes, when using restype, free allocated memory?
>
> For example, will the memory allocated by "strdup" be freed after the "del"
> statement? If not, how can I free it?
I don't think so. I did
> Does ctypes, when using restype, free allocated memory?
>
> For example, will the memory allocated by "strdup" be freed after the "del"
> statement? If not, how can I free it?
I've tried the following program and I'm more confused now :) Can anyone help?
Hi,
Does ctypes, when using restype, free allocated memory?
For example, will the memory allocated by "strdup" be freed after the "del"
statement? If not, how can I free it?
---
import ctypes
libc = ctypes.cdll.LoadLibrary('libc.so.6')
strdup = libc.strdup
strdup.restype = ctypes.c_char_p
On 2020-05-29 14:28:59 +0900, Inada Naoki wrote:
> pymalloc manages only small blocks of memory.
> Large (more than 512 byte) memory blocks are managed by malloc/free.
>
> glibc malloc doesn't return much freed memory to OS.
That depends on what "much" means.
Glibc d
pymalloc manages only small blocks of memory.
Large (more than 512 byte) memory blocks are managed by malloc/free.
glibc malloc doesn't return much freed memory to OS.
You can try jemalloc instead of glibc.
On Ubuntu 20.04, you can try it by:
LD_PRELOAD=/usr/lib/x86_64-linu
rmli...@riseup.net wrote at 2020-5-28 18:56 -0700:
>We just ran into this problem when running our aiootp package's memory
>hard password hashing function (https://github.com/rmlibre/aiootp/). The
>memory was not being cleared after the function finished running but the
>script w
On Fri, May 29, 2020 at 12:08 PM wrote:
>
>
> We just ran into this problem when running our aiootp package's memory
> hard password hashing function (https://github.com/rmlibre/aiootp/).
Have you considered implementing that module in something else? Try
Cythonizing it and see