Re: cross-platform pg_basebackup

2022-10-21 Thread davinder singh
Hi,
Patch v2 looks good to me. I have tested it, and pg_basebackup works fine
across platforms (Windows to Linux and Linux to Windows).
Syntax used for testing:
$ pg_basebackup -h remote_server_ip -p 5432 -U user_name -D backup/data -T
olddir=newdir

I have also tested with non-absolute paths; it behaves as expected.

On Fri, Oct 21, 2022 at 12:42 AM Andrew Dunstan  wrote:

>
> On 2022-10-20 Th 14:47, Robert Haas wrote:
> > On Thu, Oct 20, 2022 at 1:28 PM Tom Lane  wrote:
> >> Robert Haas  writes:
> >>> Cool. Here's a patch.
> >> LGTM, except I'd be inclined to ensure that all the macros
> >> are function-style, ie
> >>
> >> +#define IS_DIR_SEP(ch) IS_NONWINDOWS_DIR_SEP(ch)
> >>
> >> not just
> >>
> >> +#define IS_DIR_SEP IS_NONWINDOWS_DIR_SEP
> >>
> >> I don't recall the exact rules, but I know that the second style
> >> can lead to expanding the macro in more cases, which we likely
> >> don't want.  It also seems like better documentation to show
> >> the expected arguments.
> > OK, thanks. v2 attached.
> >
>
>
> Looks good.
>
>
> cheers
>
>
> andrew
>
> --
> Andrew Dunstan
> EDB: https://www.enterprisedb.com
>
>
>
>
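
As a side note, here is a minimal C sketch of the macro style Tom recommends
above. The macro names come from the patch being discussed, but the
definitions below are paraphrased for illustration, not copied from it.

/* Assumed helper from the patch (paraphrased): */
#define IS_NONWINDOWS_DIR_SEP(ch)   ((ch) == '/')

/* Function-style alias: expanded only when written as a call, and the
 * expected argument is documented right at the definition. */
#define IS_DIR_SEP(ch)  IS_NONWINDOWS_DIR_SEP(ch)

/*
 * Object-style alternative (the one being advised against): every later
 * occurrence of the bare token IS_DIR_SEP would be replaced, even where it
 * is not written as a call, so the macro can expand in more cases than
 * intended:
 *
 *      #define IS_DIR_SEP  IS_NONWINDOWS_DIR_SEP
 */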

-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com


Re: Optimize external TOAST storage

2022-03-15 Thread davinder singh
Thanks, Nathan, for the review comments. Please find the updated patch.
On Sun, Mar 13, 2022 at 3:43 AM Nathan Bossart 
wrote:

> Do you think it is worth making this configurable?  I don't think it is
> outside the realm of possibility for some users to care more about disk
> space than read performance.  And maybe the threshold for this optimization
> could differ based on the workload.
>

I think we can break this into two parts.
The first part: if the user cares about disk space more than read
performance, should we let them disable this? That is a good idea; we can
try it, let's see what others say.

Regarding the second part, configuring the threshold: based on our
experiments, we have fixed it for attributes with size > 2 * chunk_size.
The default chunk_size is 2KB and the page size is 8KB. During toasting,
each attribute is divided into chunks, and each page can hold at most 4
such chunks. We only need to think about the space used by the last chunk
of the attribute. This means each value taking this optimization might use
extra space in the range (0B, 2KB]. I think this extra space is independent
of attribute size, so we don't need to worry about configuring this
threshold. Let me know if I missed something here.
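
To make the trade-off concrete, here is a small C sketch of the rule
described above. This is my own illustration, not code from the patch, and
chunk_size stands for the default ~2KB chunk payload:

#include <stdbool.h>
#include <stddef.h>

/*
 * Keep the compressed copy only if it saves at least one whole chunk of
 * external storage; otherwise store the value uncompressed.  Because both
 * sizes are rounded up to whole chunks, skipping compression can waste at
 * most one chunk, i.e. an amount in the range (0B, 2KB] per value.
 */
static bool
compression_saves_a_chunk(size_t raw_len, size_t compressed_len,
                          size_t chunk_size)
{
    size_t raw_chunks = (raw_len + chunk_size - 1) / chunk_size;
    size_t compressed_chunks = (compressed_len + chunk_size - 1) / chunk_size;

    return compressed_chunks < raw_chunks;
}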


>  extern void toast_tuple_externalize(ToastTupleContext *ttc, int attribute,
>  int options);
> +extern void toast_tuple_opt_externalize(ToastTupleContext *ttc, int
> attribute,
> +int options, Datum
> old_toast_value,
> +ToastAttrInfo *old_toast_attr);
>
> Could we bake this into toast_tuple_externalize() so that all existing
> callers benefit from this optimization?  Is there a reason to export both
> functions?  Perhaps toast_tuple_externalize() should be renamed and made
> static, and then this new function could be called
> toast_tuple_externalize() (which would be a wrapper around the internal
> function).
>
This function is used only in heap_toast_insert_or_update(), so all
existing callers now go through the new function. As you suggested, I have
made the new function the wrapper (around the now-internal old one) and am
exporting only the new function.
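
Roughly, the arrangement now looks like the sketch below. This is a
simplified skeleton of my own to show the shape of the change, not the
patch's actual code; the real signatures are the ones in the v3 patch.

/* internal worker, no longer exported */
static void
toast_tuple_externalize_internal(ToastTupleContext *ttc, int attribute,
                                 int options)
{
    /* ... original externalization logic ... */
}

/*
 * Exported wrapper: decides whether to externalize the freshly compressed
 * value or fall back to the original uncompressed one, then hands off to
 * the worker.
 */
void
toast_tuple_externalize(ToastTupleContext *ttc, int attribute, int options,
                        Datum old_toast_value, ToastAttrInfo *old_toast_attr)
{
    /* ... choose compressed vs. uncompressed representation ... */
    toast_tuple_externalize_internal(ttc, attribute, options);
}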


>
> +/* Sanity check: if data is not compressed then we can proceed as
> usual. */
> +if (!VARATT_IS_COMPRESSED(DatumGetPointer(*value)))
> +toast_tuple_externalize(ttc, attribute, options);
>
> With a --enable-cassert build, this line causes assertion failures in the
> call to GetMemoryChunkContext() from pfree().  'make check' is enough to
> reproduce it.  Specifically, it fails the following assertion:
>
> AssertArg(MemoryContextIsValid(context));
>

Thanks for pointing it out. This failure started because I was not handling
the case where the data is already compressed before toasting begins. The
following check tests whether the data is compressed, but that is not
enough, because we cannot apply the optimization if we did not get the data
in uncompressed form from the source in the first place:
+/* If data is not compressed then we can proceed as usual. */
+if (!VARATT_IS_COMPRESSED(DatumGetPointer(*value)))

The v1 patch did not have this problem because it checked whether the data
was compressed during this toasting round. I have changed back to the
earlier version:
+/* If data is not compressed then we can proceed as usual. */
+if (*value == orig_toast_value)
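
In other words (a sketch of my own to restate the distinction, using the
variable names from the snippets above):

/*
 * True only if *this* toasting pass produced a compressed copy; a value
 * that arrived already compressed keeps its original datum, so the two
 * pointers still compare equal and we proceed as usual.
 */
bool compressed_this_pass = (*value != orig_toast_value);

/*
 * VARATT_IS_COMPRESSED(DatumGetPointer(*value)), by contrast, is also true
 * for values that were compressed before toasting started, which is why it
 * is not a sufficient test here.
 */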



-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com


v3_0001_optimize_external_toast_storage.patch
Description: Binary data


Re: Optimize external TOAST storage

2022-03-10 Thread davinder singh
Thanks, Dilip. I have addressed your comments; please find the updated patch.

On Tue, Mar 8, 2022 at 9:44 PM Dilip Kumar  wrote:

> +/* incompressible, ignore on subsequent compression passes. */
> +orig_attr->tai_colflags |= TOASTCOL_INCOMPRESSIBLE;
>
> Do we need to set TOASTCOL_INCOMPRESSIBLE while trying to externalize
> it? The comment says "ignore on subsequent compression passes",
> but after this will there be more compression passes?  If we need to
> set TOASTCOL_INCOMPRESSIBLE, then the comment should explain this.
>
That was a mistake; this flag is not required at this point. Once the
attribute is externalized it is marked as TOASTCOL_IGNORE, and such columns
are not considered for compression, so I have removed it. Thanks for
pointing it out.

-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com


v2_0001_optimize_external_toast_storage.patch
Description: Binary data


Optimize external TOAST storage

2022-02-28 Thread davinder singh
Hi,

For TOAST storage [1] in PostgreSQL, the attribute value is first
compressed and then divided into chunks. The problem with storing the
compressed value is that, if we do not save enough space to reduce the
number of chunks, we end up paying extra decompression cost on every read.
Based on the discussion with Robert and Dilip, we tried to optimize this
process. The idea is to TOAST the compressed value only if it saves at
least 1 chunk (2KB by default) of disk storage; otherwise, store the
uncompressed value. This way we avoid decompression costs without losing
much on storage.

In our tests, we observed read-performance improvements of up to 28% at the
cost of at most TOAST_MAX_CHUNK_SIZE (2KB) of extra disk storage per value.
The gain is larger for big attributes (size > 4KB) because
compression/decompression cost increases with size.
However, we found that this does not hold when the compression ratio is
high and the value is barely big enough to trigger TOASTing. For example,
in test 4.b below we did not get any performance advantage, but the table
size grew by 42% from storing uncompressed values.
Test Setup.

Create table t1_lz4 ( a text compression lz4, b text compression lz4);
-- Generate random data
create or replace function generate_att_data(len_info int)
returns text
language plpgsql
as
$$
declare
   value text;
begin
   select array_agg(md5(g::text))
   into value
   from generate_series(1, round(len_info/33)::int) g;
   return value;
end;
$$;

--Test
Select b from t1_lz4;

Test 1:
Data: rows 20
insert into t1_lz4(a, b) select generate_att_data(364), repeat
(generate_att_data(1980), 2);
Summary:
Attribute size: original: 7925 bytes, after compression: 7845 bytes
Time for select: head: 42 sec, patch: 37 sec, Performance Gain: 11%
table size: Head 1662 MB, Patch: 1662 MB

Test 2:
Data: rows 10
insert into t1_lz4(a, b) select generate_att_data(364),
generate_att_data(16505);
Summary:
Attribute size: original: 16505 bytes, after compression: 16050 bytes
Time for select: head: 35.6 sec, patch: 30 sec, Performance Gain: 14%
table size: Head 1636 MB, Patch: 1688 MB

Test 3:
Data: rows 5
insert into t1_lz4(a, b) select generate_att_data(364),
generate_att_data(31685);
Summary:
Attribute size: original: 31685 bytes, after compression: 30263 bytes
Time for select: head: 35.4 sec, patch: 25.5 sec, Performance Gain: 28%
table size: Head 1601 MB, Patch: 1601 MB

Test 4.a:
Data: rows 20
insert into t1_lz4(a, b) select generate_att_data(11), repeat ('b', 250)
|| generate_att_data(3885);
Summary:
Attribute size: original: 3885 bytes, after compression: 3645 bytes
Time for select: head: 28 sec, patch: 26 sec, Performance Gain: 7%
table size: Head 872 MB, Patch: 872 MB

Test 4.b (High compression):
Data: rows 20
insert into t1_lz4(a, b) select generate_att_data(364), repeat
(generate_att_data(1980), 2);
Summary:
Attribute size: original: 3966 bytes, after compression: 2012 bytes
Time for select: head: 27 sec, patch: 26 sec, Performance Gain: 0%
table size: Head 612 MB, Patch: 872 MB

This is the worst case for this optimization, for two reasons.
First, the table size grows by about 50% when the compressed size is half
of the original size and yet barely large enough for TOASTing. The table
cannot grow more than that, because if compression reduces the size any
further it also reduces the number of chunks, and then the compressed value
is stored.
Second, there is not much gain in performance because the attribute size is
small: almost twice as many attributes fit on a page, so each page access
can fetch twice the number of rows, and small values also mean low
compression/decompression costs.

We avoid such cases by applying the optimization only when the attribute
size is > 4KB.

Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com


v1_0001_optimize_external_toast_storage.patch
Description: Binary data


Re: PG compilation error with Visual Studio 2015/2017/2019

2020-05-06 Thread davinder singh
On Wed, May 6, 2020 at 10:11 AM Amit Kapila  wrote:

>
> > I think that the definition of get_iso_localename() should be consistent
> > across all versions, that is, HEAD should match the back-patched versions.
> >
>
> Fair enough.  I have changed it such that get_iso_localename is the same
> in HEAD as in the back-branch patches.  I have attached the back-branch
> patches for ease of verification.
>

I have verified/tested the latest patches for all versions and didn't find
any problem.
-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com


Re: PG compilation error with Visual Studio 2015/2017/2019

2020-04-28 Thread davinder singh
On Tue, Apr 28, 2020 at 11:45 PM Juan José Santamaría Flecha <
juanjo.santama...@gmail.com> wrote:

>
> On Tue, Apr 28, 2020 at 5:16 PM davinder singh <
> davindersingh2...@gmail.com> wrote:
>
>> I have tested with different locales and code pages, including the ones
>> above. There are a few that return a different locale code, but the error
>> messages are the same in both cases. I have attached the test and log
>> files.
>>
>> But there was one case where both the locale code and the error messages
>> are different:
>> Portuguese_Brazil.1252
>>
>> log from [1]
>> 2020-04-28 14:27:39.785 GMT [2284] DEBUG:  IsoLocaleName() executed;
>> locale: "pt"
>> 2020-04-28 14:27:39.787 GMT [2284] ERROR:  division by zero
>> 2020-04-28 14:27:39.787 GMT [2284] STATEMENT:  Select 1/0;
>>
>> log from [2]
>> 2020-04-28 14:36:20.666 GMT [14608] DEBUG:  IsoLocaleName() executed;
>> locale: "pt_BR"
>> 2020-04-28 14:36:20.673 GMT [14608] ERRO:  divisão por zero
>> 2020-04-28 14:36:20.673 GMT [14608] COMANDO:  Select 1/0;
>>
>
> AFAICT, the good result is coming from the new logic.
>
Yes, I also feel the same.

-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com


Re: PG compilation error with Visual Studio 2015/2017/2019

2020-04-28 Thread davinder singh
On Wed, Apr 29, 2020 at 8:24 AM Amit Kapila  wrote:

> BTW, do you see any different results for pt_PT with create_locale
> version or the new patch version being discussed here?
>
No, there is no difference for pt_PT. The difference you are noticing is
because of the previous locale setting.

-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com


Re: PG compilation error with Visual Studio 2015/2017/2019

2020-04-28 Thread davinder singh
On Mon, Apr 27, 2020 at 4:50 PM Amit Kapila  wrote:

> Bemba_Zambia
> Bena_Tanzania
> Bulgarian_Bulgaria
> Swedish_Sweden.1252
> Swedish_Sweden
>

I have tested with different locales and code pages, including the ones
above. There are a few that return a different locale code, but the error
messages are the same in both cases. I have attached the test and log
files.
But there was one case where both the locale code and the error messages
are different:
Portuguese_Brazil.1252

log from [1]
2020-04-28 14:27:39.785 GMT [2284] DEBUG:  IsoLocaleName() executed;
locale: "pt"
2020-04-28 14:27:39.787 GMT [2284] ERROR:  division by zero
2020-04-28 14:27:39.787 GMT [2284] STATEMENT:  Select 1/0;

log from [2]
2020-04-28 14:36:20.666 GMT [14608] DEBUG:  IsoLocaleName() executed;
locale: "pt_BR"
2020-04-28 14:36:20.673 GMT [14608] ERRO:  divisão por zero
2020-04-28 14:36:20.673 GMT [14608] COMANDO:  Select 1/0;

[1] full_locale_lc_message_test_create_locale_1.txt: log generated by using
the old patch (it uses _create_locale API to get locale info)
[2] full_locale_lc_message_test_getlocale_1.txt: log generated using the
patch v13

-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com
[Attached log excerpt, full_locale_lc_message_test_create_locale_1.txt: the
same three-line pattern repeats for each tested locale (af, sq, am, ar_DZ,
ar_BH, ar_EG, ar_IQ, ar_JO, ar_KW, ar_LB, ar_LY, ar_MA, ar_OM, ar_QA, ar,
ar_SY, ar_TN, ar_AE, ar_YE, ...), for example:

2020-04-28 14:27:36.353 GMT [2284] DEBUG:  IsoLocaleName() executed; locale: "af"
2020-04-28 14:27:36.362 GMT [2284] ERROR:  division by zero
2020-04-28 14:27:36.362 GMT [2284] STATEMENT:  Select 1/0;]

Re: PG compilation error with Visual Studio 2015/2017/2019

2020-04-23 Thread davinder singh
On Thu, Apr 23, 2020 at 6:49 PM Juan José Santamaría Flecha <
juanjo.santama...@gmail.com> wrote:

>
> On Thu, Apr 23, 2020 at 3:00 PM Amit Kapila 
> wrote:
>
>>
>> Thanks, I will verify these.  BTW, have you done something special to
>> get the error messages that are not in English? On my Windows box I am
>> not getting them in spite of setting the appropriate locale.  Did you use
>> ICU or something else?
>>
>
> If you are trying to view the messages using a CMD, I do not think that
> is possible unless you have the OS language installed. I read the results
> from the log file.
>
I have checked the log file as well, but I still do not see any change in
the error message language. I am checking two log files: one produced by
enabling logging_collector in the conf file, and the second generated with
the "pg_ctl -l" option.
I am using Windows 10.
Are you generating the log file in some other way?
Did you manually install any of the locales you mentioned in the test file?

Also, after initdb I see only the following standard locales in the
pg_collation catalog.
postgres=# select * from pg_collation;
  oid  | collname  | collnamespace | collowner | collprovider | collisdeterministic | collencoding | collcollate | collctype | collversion
-------+-----------+---------------+-----------+--------------+---------------------+--------------+-------------+-----------+-------------
   100 | default   |            11 |        10 | d            | t                   |           -1 |             |           |
   950 | C         |            11 |        10 | c            | t                   |           -1 | C           | C         |
   951 | POSIX     |            11 |        10 | c            | t                   |           -1 | POSIX       | POSIX     |
 12327 | ucs_basic |            11 |        10 | c            | t                   |            6 | C           | C         |
(4 rows)

Maybe Postgres is not able to get all the installed locales from the system
in my case. Can you confirm if you are getting different results in
pg_collation?

-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com


Re: PG compilation error with Visual Studio 2015/2017/2019

2020-04-20 Thread davinder singh
On Mon, Apr 20, 2020 at 10:10 AM Amit Kapila 
wrote:

> Yes, I am planning to look into it.  Actually, I think the main thing
> here is to ensure that we don't break something which was working with
> _create_locale API.

I am trying to understand how lc_messages affects the error messages on
Windows, but I have not seen any change in the error message the way I do
when changing lc_messages on a Linux system.
Can someone help me with this? Please let me know if there is any
configuration setting that I need to adjust.

-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com


Re: PG compilation error with Visual Studio 2015/2017/2019

2020-04-15 Thread davinder singh
Thanks for the review comments.

On Tue, Apr 14, 2020 at 9:12 PM Ranier Vilela  wrote:

> >> I am still working on testing this patch. If anyone has an idea,
> >> please suggest.
> I still see problems with this patch.
>
> 1. Variable loct has a redundant initialization; it would be enough to
> declare it as: _locale_t loct;
> 2. Style: whitespace in the rc variable declaration.
> 3. Style: the scope of variable cp_index can be reduced:
> if (tmp != NULL) {
>size_t cp_index;
>
> cp_index = (size_t)(tmp - winlocname);
> strncpy(loc_name, winlocname, cp_index);
> loc_name[cp_index] = '\0';
> 4. Memory leak: if _WIN32_WINNT >= 0x0600 is true, _free_locale(loct) is
> never called.
>
I resolved the above comments.


> 5. Why call _create_locale if _WIN32_WINNT >= 0x0600 is true and loct is
> not used?
>
_create_locale can accept a wider range of inputs than GetLocaleInfoEx, but
we are only interested in language[_country-region[.code-page]]. We use
_create_locale to validate the given input, because GetLocaleInfoEx cannot
verify a locale name that has a code page appended. So, before parsing, we
verify that the whole input locale name is valid by using _create_locale. I
hope that answers your question.
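
A minimal sketch of that flow (my own illustration of the idea, not the
patch itself; error handling is omitted and the code-page stripping is only
hinted at):

/* Validate the full "language[_country-region[.code-page]]" spec first;
 * GetLocaleInfoEx alone cannot do that when a code page is appended. */
_locale_t loct = _create_locale(LC_ALL, winlocname);

if (loct != NULL)
{
    _free_locale(loct);     /* free it right away, per review comment 4 */

    /*
     * The name is valid: strip any ".code-page" suffix here and pass the
     * remaining "language[_country-region]" part to GetLocaleInfoEx for
     * the actual lookup.
     */
}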

I have attached the patch.
-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com

On Tue, Apr 14, 2020 at 1:07 PM davinder singh 
wrote:

>
> On Fri, Apr 10, 2020 at 5:33 PM Amit Kapila 
> wrote:
> >
> > It seems the right direction to use GetLocaleInfoEx as we have already
> > used it to handle a similar problem (lc_codepage is missing in
> > _locale_t in higher versions of MSVC (cf commit 0fb54de9aa)) in
> > chklocale.c.  However, I see that we have added a manual parsing there
> > if GetLocaleInfoEx doesn't parse it.  I think that addresses your
> > concern for _create_locale handling bigger sets.  Don't we need
> > something equivalent here for the cases which GetLocaleInfoEx doesn't
> > support?
> I am investigating along similar lines; I think the following explanation
> can help.
> From the Microsoft docs:
> The locale argument to setlocale, _wsetlocale, _create_locale, and
> _wcreate_locale is:
> locale :: "locale-name"
> | "language[_country-region[.code-page]]"
> | ".code-page"
> | "C"
> | ""
> | NULL
>
> For GetLocaleInfoEx locale argument is
> *-*
> -

Re: PG compilation error with Visual Studio 2015/2017/2019

2020-04-14 Thread davinder singh
On Fri, Apr 10, 2020 at 5:33 PM Amit Kapila  wrote:
>
> It seems the right direction to use GetLocaleInfoEx as we have already
> used it to handle a similar problem (lc_codepage is missing in
> _locale_t in higher versions of MSVC (cf commit 0fb54de9aa)) in
> chklocale.c.  However, I see that we have added a manual parsing there
> if GetLocaleInfoEx doesn't parse it.  I think that addresses your
> concern for _create_locale handling bigger sets.  Don't we need
> something equivalent here for the cases which GetLocaleInfoEx doesn't
> support?
I am investigating along similar lines; I think the following explanation
can help.
From the Microsoft docs:
The locale argument to setlocale, _wsetlocale, _create_locale, and
_wcreate_locale is:
locale :: "locale-name"
| "language[_country-region[.code-page]]"
| ".code-page"
| "C"
| ""
| NULL

For GetLocaleInfoEx locale argument is
*-*

Re: PG compilation error with Visual Studio 2015/2017/2019

2020-04-09 Thread davinder singh
On Wed, Apr 8, 2020 at 7:39 PM Juan José Santamaría Flecha

> Let me explain further: in pg_config_os.h you can check that the value of
> _WIN32_WINNT is based solely on checking _MSC_VER. This patch should also
> be meaningful for WIN32 builds using MinGW, or we might see this issue
> reappear on those systems if we update the MIN_WINNT value to more current
> OS versions. So I still think _WIN32_WINNT is a better option.
>
Thanks for the explanation; I was not aware of that. You are right, it
makes sense to use "_WIN32_WINNT", and I am now using that.

I still see the same last lines in both #ifdef blocks, and pgindent might
> change a couple of lines to:
> + MultiByteToWideChar(CP_ACP, 0, winlocname, -1, wc_locale_name,
> + LOCALE_NAME_MAX_LENGTH);
> +
> + if ((GetLocaleInfoEx(wc_locale_name, LOCALE_SNAME,
> + (LPWSTR), LOCALE_NAME_MAX_LENGTH)) > 0)
> + {
>
I have now resolved these comments as well; please check the updated
version of the patch.


> Please open an item in the commitfest for this patch.
>
I have created one with the same title.


-- 
Regards,
Davinder
EnterpriseDB: http://www.enterprisedb.com


0001-PG-compilation-error-with-VS-2015-2017-2019.patch
Description: Binary data


Re: PG compilation error with Visual Studio 2015/2017/2019

2020-04-08 Thread davinder singh
On Tue, Apr 7, 2020 at 8:30 PM Juan José Santamaría Flecha <
juanjo.santama...@gmail.com> wrote:

>
> * The logic of "defined(_MSC_VER) && (_MSC_VER >= 1900)" is expressed as
> "_WIN32_WINNT >= 0x0600" in other parts of the code. I would
> recommend using the latter.
>
I think "_WIN32_WINNT >= 0x0600" represents Windows versions only and does
not carry any information about Visual Studio versions, so I am sticking
with "defined(_MSC_VER) && (_MSC_VER >= 1900)".
I have resolved the other comments and attached a new version of the patch.
-- 
Regards,
Davinder.


0001-PG-compilation-error-with-VS-2015-2017-2019.patch
Description: Binary data


Re: PG compilation error with Visual Studio 2015/2017/2019

2020-04-06 Thread davinder singh
On Mon, Apr 6, 2020 at 8:17 PM Juan José Santamaría Flecha <
juanjo.santama...@gmail.com> wrote:

>
> How do you reproduce this issue with Visual Studio? I see there is an
> ifdef directive above IsoLocaleName():
>
> #if defined(WIN32) && defined(LC_MESSAGES)
>
> I would expect  defined(LC_MESSAGES) to be false in MSVC.
>

You need to enable NLS support in the config file. Let me know if that
answers your question.

-- 
Regards,
Davinder.


PG compilation error with Visual Studio 2015/2017/2019

2020-04-06 Thread davinder singh
Hi All,

I am working on the “pg_locale compilation error with Visual Studio 2017”;
related threads: [1], [2].
We are getting a compilation error in the static char *IsoLocaleName(const
char *winlocname) function in pg_locale.c. This function converts a locale
name into the Unix style; for example, it changes "en-US" into "en_US".
It creates a locale using _locale_t _create_locale(int category, const
char *locale) and then tries to read that locale's name through the
structure's internal member loct->locinfo->locale_name[LC_CTYPE], but that
member has been missing from _locale_t since VS2015, which causes the
compilation error. I found a few useful APIs that can be used here.

ResolveLocaleName and GetLocaleInfoEx can both take a locale in the
following format:
-
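
As a rough illustration of the direction, here is a small sketch of my own
(not an actual patch): it skips error handling and any code-page suffix
handling, and simply asks Windows for the ISO language and country codes.

#include <windows.h>
#include <stdio.h>

/* e.g. L"en-US" -> "en_US"; leaves dst empty on failure */
static void
iso_locale_name_sketch(const wchar_t *winlocname, char *dst, size_t dstlen)
{
    wchar_t lang[16];
    wchar_t country[16];

    /* Both calls are available since Windows Vista; they return the number
     * of characters written (including the terminating null) or 0 on error. */
    if (GetLocaleInfoEx(winlocname, LOCALE_SISO639LANGNAME, lang, 16) > 0 &&
        GetLocaleInfoEx(winlocname, LOCALE_SISO3166CTRYNAME, country, 16) > 0)
    {
        /* %S is the MSVC printf conversion for a wide string argument */
        snprintf(dst, dstlen, "%S_%S", lang, country);
    }
    else
        dst[0] = '\0';
}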