On Tue, Jun 6, 2017 at 12:44 PM, Tom Lane wrote:
> By definition, the address range we're trying to reuse worked successfully
> in the postmaster process. I don't see how forcing a specific address
> could do anything but create an additional risk of postmaster startup
>
Robert Haas writes:
> I think the idea of retrying process creation (and I definitely agree
> with Tom and Magnus that we have to retry process creation, not just
> individual mappings) is a good place to start. Now if we find that we
> are having to retry frequently, then
On Mon, Jun 5, 2017 at 10:10 PM, Amit Kapila wrote:
> Agreed. By the way, while browsing about this problem, I found that
> one other open source (nginx) has used a solution similar to what
> Andres was proposing upthread to solve this problem. Refer:
>
Amit Kapila writes:
> Sure. I think it is slightly tricky because specs don't say clearly
> how ASLR can impact the behavior of any API and in my last attempt I
> could not reproduce the issue.
> I can try to do basic verification with the patch you have proposed,
> but
On Mon, Jun 5, 2017 at 9:15 AM, Tom Lane wrote:
> Amit Kapila writes:
>
>> I think the same problem can happen during reattach as well.
>> Basically, MapViewOfFileEx can fail to load image at predefined
>> address (UsedShmemSegAddr).
>
> Once we've
Amit Kapila writes:
> On Mon, Jun 5, 2017 at 4:00 AM, Tom Lane wrote:
>> I took a quick look at this, and it seems rather beside the point.
> What I understood from the randomization shm allocation behavior due
> to ASLR is that when we try to
Amit Kapila writes:
> Okay, I have added the comment to explain the same. I have also
> modified the patch to adjust the looping as per your suggestion.
I took a quick look at this, and it seems rather beside the point.
You can't just loop inside an already-forked
On Fri, Jun 2, 2017 at 7:20 PM, Petr Jelinek wrote:
> On 02/06/17 15:37, Amit Kapila wrote:
>>
>> No, it is to avoid calling free of memory which is not reserved on
>> retry. See the comment:
>> + * On the first try, release memory region reservation that was made
On Fri, May 26, 2017 at 05:50:45PM +0530, Amit Kapila wrote:
> On Fri, May 26, 2017 at 5:30 AM, Tsunakawa, Takayuki wrote:
> > I guessed that the reason Noah suggested 1 - 5 seconds of retry is based
> > on the expectation that the address space might be freed
Robert Haas writes:
> So, are you going to, perhaps, commit this? Or who is picking this up?
> /me knows precious little about Windows.
I'm not going to be the one to commit this either, but seems like someone
should.
regards, tom lane
On Fri, May 26, 2017 at 10:51 AM, Magnus Hagander wrote:
> I would definitely suggest putting it in HEAD (and thus, v10) for a while to
> get some real world exposure before backpatching. But if it does work out
> well in the end, then we can certainly consider backpatching
On Fri, May 26, 2017 at 8:20 AM, Amit Kapila wrote:
> I think the real question here is, shall we backpatch this fix or we
> want to do this just in Head or we want to consider it as a new
> feature for PostgreSQL-11. I think it should be fixed in Head and the
> change
Amit Kapila writes:
> Yes, I also share this opinion, the shm attach failures are due to
> randomization behavior, so sleep won't help much. So, I will change
> the patch to use 100 retries unless people have other opinions.
Sounds about right to me.
On Thu, May 25, 2017 at 8:41 AM, Noah Misch wrote:
> On Thu, May 25, 2017 at 11:41:19AM +0900, Michael Paquier wrote:
>
>> Indeed, pgrename() does so with a 100ms sleep time between each
>> iteration. Perhaps we could do that and limit to 50 iterations?
>
> pgrename() is
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Noah Misch
> Ten feels low to me. The value should be low enough so users don't give
> up and assume a permanent hang, but there's little advantage to making it
> lower.
> I'd set it such that
On Tue, May 23, 2017 at 8:14 AM, Amit Kapila wrote:
> So it seems both you and Tom are leaning towards some sort of retry
> mechanism for shm reattach on Windows. I also think that is a viable
> option to negate the impact of ASLR. Attached patch does that. Note
>
Amit Kapila writes:
> Yeah, that's right. Today, I have spent some time to analyze how and
> where retry logic is required. I think there are two places where we
> need this retry logic, one is if we fail to reserve the memory
> (pgwin32_ReserveSharedMemoryRegion) and
On 25 Apr. 2017 13:37, "Heikki Linnakangas" wrote:
For some data shared memory structures, that store no pointers, we wouldn't
need to insist that they are mapped to the same address in every backend,
though. In particular, shared_buffers. It wouldn't eliminate the problem,
On 2017-04-24 14:43:11 -0400, Tom Lane wrote:
> (We have accepted that kind of overhead for DSM segments, but the
> intention I think is to allow only very trivial data structures in
> the DSM segments. Losing compiler pointer type checking for data
> structures like the lock or PGPROC tables
Andres Freund writes:
> On 2017-04-24 23:14:40 +0800, Craig Ringer wrote:
>> In the long run we'll probably be forced toward threading or far pointers.
> I'll vote for removing the windows port, before going for that. And I'm
> not even joking.
Me too. We used to *have*
Hi,
On 2017-04-24 14:25:34 +0530, Amit Kapila wrote:
> Error code 87 means "invalid parameter". Some googling [1] indicates
> such an error occurs if we pass the out-of-range address to
> MapViewOfFileEx. Another possible theory is that we must pass the
> address as multiple of the system's
On 24 April 2017 at 16:55, Amit Kapila wrote:
> Another thing I have tried is to just start the server by setting
> RandomizedBaseAddress="TRUE". I have tried about 15-20 times but
> could not reproduce the problem related to shared memory attach. We
> have tried the
On 16 April 2017 at 05:18, Andres Freund wrote:
> Because of ASLR of the main executable (i.e. something like PIE). It'll
> supposedly become harder (as in only running in compatibility modes) if
> binaries don't enable that. It's currently disabled somewhere in the VC
>
Andres Freund writes:
> On 2017-04-20 16:57:03 +0530, Amit Kapila wrote:
>> Agreed. I have done some further study by using VMMap tool in Windows
>> and it seems to me that all 64-bit processes use address range
>> (0001 ~ 07FE). I have attached two
Amit Kapila writes:
> On Sun, Apr 16, 2017 at 3:04 AM, Tom Lane wrote:
>> Obviously, any such fix would be a lot more likely to be reliable in
>> 64-bit machines. There's probably not enough daylight to be sure of
>> making it work in 32-bit Windows,
Andres Freund writes:
> On 2017-04-15 17:24:54 -0400, Tom Lane wrote:
>> I wonder whether we could work around that by just destroying the created
>> process and trying again if we get a collision. It'd be a tad
>> inefficient, but hopefully collisions wouldn't happen often
Andres Freund writes:
> On 2017-04-15 16:48:05 -0400, Tom Lane wrote:
>> Concretely, I propose the attached patch. We'd have to put it into
>> all supported branches, since culicidae is showing intermittent
>> "could not reattach to shared memory" failures in all the
Andres Freund writes:
> On 2017-04-15 17:09:38 -0400, Tom Lane wrote:
>> Why doesn't Windows' ability to map the segment into the new process
>> before it executes take care of that?
> Because of ASLR of the main executable (i.e. something like PIE).
Not following. Are you
Andres Freund writes:
> That seems quite reasonable. I'm afraid we're going to have to figure
> out something similar, but more robust, for windows soon-ish :/
Why doesn't Windows' ability to map the segment into the new process
before it executes take care of that?
> As a
I wrote:
> I think what may be the most effective way to proceed is to provide
> a way to force the shmem segment to be mapped at a chosen address.
> It looks like, at least on x86_64 Linux, mapping shmem at
> 0x7E00 would work reliably.
> Since we only care about this for testing
On April 14, 2017 9:42:41 PM PDT, Tom Lane wrote:
> Per
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae=2017-04-15%2004%3A00%3A02
>
> 2017-04-15 04:31:21.657 GMT [16792] FATAL: could not reattach to
> shared memory (key=6280001, addr=0x7f692fece000): Invalid argument
>
> Presumably, this is the same issue