Re: [PATCH] wwwdocs: contribute.html: Update consensus on patch content.
On 4/25/24 14:32, Richard Biener wrote:
>
>> Am 25.04.2024 um 17:44 schrieb Carlos O'Donell:
>>
>> Discussion is here:
>> https://inbox.sourceware.org/gcc/CAPS5khZeWkAD=v8ka9g5eecdnk3bdhfnzjumpvc+hedmkvj...@mail.gmail.com/
>>
>> Rough consensus from Jakub Jelinek, Richard Biener and others is
>> that maintainers are for the change.
>
> Ok

Pushed. Thanks for helping me move this forward.

I look forward to pre-commit CI across all the projects. Helping Linaro
make forward progress here is my primary goal.

-- 
Cheers,
Carlos.
[PATCH] wwwdocs: contribute.html: Update consensus on patch content.
Discussion is here:
https://inbox.sourceware.org/gcc/CAPS5khZeWkAD=v8ka9g5eecdnk3bdhfnzjumpvc+hedmkvj...@mail.gmail.com/

Rough consensus from Jakub Jelinek, Richard Biener and others is
that maintainers are for the change.

This changes the contribution notes to allow it.
---
 htdocs/contribute.html | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/htdocs/contribute.html b/htdocs/contribute.html
index 7c1ae323..e8137edc 100644
--- a/htdocs/contribute.html
+++ b/htdocs/contribute.html
@@ -195,8 +195,9 @@ of your testing.
 The patch itself
-Do not include generated files as part of the patch, just mention
-them in the ChangeLog (e.g., "* configure: Regenerate.").
+The patch should include everything you are changing (including
+regenerated files which should be noted in the ChangeLog e.g.
+"* configure: Regenerate.").
-- 
2.44.0
Re: [RFC] GCC Security policy
On 8/8/23 13:46, David Edelsohn wrote:
> I believe that upstream projects for components that are imported
> into GCC should be responsible for their security policy, including
> libgo, gofrontend, libsanitizer (other than local patches), zlib,
> libtool, libphobos, libcody, libffi, eventually Rust libcore, etc.

I agree completely. We can reference the upstream and direct people to
follow upstream security policy for these bundled components.

Any other policy risks having conflicting guidance between the projects,
which is not useful for security policy.

There might be exceptions to this rule, particularly when the downstream
wants to accept particular risks while upstream does not; but none of
these components are in that case IMO.

-- 
Cheers,
Carlos.
Re: git out-of-order commit (was Re: [PATCH] Fortran: Remove unused declaration)
On 1/19/23 23:26, Bernhard Reutner-Fischer wrote:
> On 19 January 2023 20:39:08 CET, Jason Merrill wrote:
>> On Sat, Nov 12, 2022 at 4:24 PM Harald Anlauf via Gcc-patches wrote:
>>>
>>> Am 12.11.22 um 22:05 schrieb Bernhard Reutner-Fischer via Gcc-patches:
>>>> This function definition was removed years ago, remove its prototype.
>>>>
>>>> gcc/fortran/ChangeLog:
>>>>
>>>>         * gfortran.h (gfc_check_include): Remove declaration.
>>>> ---
>>>>  gcc/fortran/gfortran.h | 1 -
>>>>  1 file changed, 1 deletion(-)
>>>>
>>>> Regtests cleanly, ok for trunk?
>>>>
>>>> diff --git a/gcc/fortran/gfortran.h b/gcc/fortran/gfortran.h
>>>> index c4deec0d5b8..ce3ad61bb52 100644
>>>> --- a/gcc/fortran/gfortran.h
>>>> +++ b/gcc/fortran/gfortran.h
>>>> @@ -3208,7 +3208,6 @@ int gfc_at_eof (void);
>>>>  int gfc_at_bol (void);
>>>>  int gfc_at_eol (void);
>>>>  void gfc_advance_line (void);
>>>> -int gfc_check_include (void);
>>>>  int gfc_define_undef_line (void);
>>>>  int gfc_wide_is_printable (gfc_char_t);
>>>
>>> OK, thanks.
>>
>> Somehow this was applied with a CommitDate in 2021, breaking scripts
>> that assume monotonically increasing CommitDate. Anyone know how that
>> could have happened?
>
> Sorry for that.
> I think I cherry-picked this commit to master before pushing it, not
> 100% sure though.
> What shall we do now?

I doubt a cherry-pick did this; we cherry-pick often in glibc, and the
commit is added to the top of the checkout with the commit date updated.

There isn't anything we can do now.

I was recently made aware that --since-as-filter= was added specifically
to address this issue:

https://patchwork.kernel.org/project/git/patch/YlnYDgZRzDI87b/z...@vmiklos.hu/
~~~
This is similar to --since, but it will filter out not matching commits,
rather than stopping at the first not matching commit.

This is useful if you e.g. want to list the commits from the last year,
but one odd commit has a bad commit date and that would hide lots of
earlier commits in that range.

The behavior of --since is left unchanged, since it's valid to depend on
its current behavior.

Signed-off-by: Miklos Vajna
~~~
"but one odd commit has a bad commit date" :-)

We should try to avoid commits like this because they really complicate
any date-based analysis tooling, and --since-as-filter is fairly new.
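For example (untested, and assuming a git new enough to have the
option), something like "git log --since-as-filter='1 year ago'" should
still list all of the last year's commits, whereas plain --since stops
walking at the mis-dated commit and hides the earlier commits in that
range, exactly as the patch description above says.

-- 
Cheers,
Carlos.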
Re: [PATCH] Implement new hook for max_align_t_align
On 10/12/2016 10:17 AM, John David Anglin wrote:
> On 2016-10-12 9:48 AM, Jason Merrill wrote:
>> On Wed, Oct 12, 2016 at 4:02 AM, Jakub Jelinek wrote:
>>> On Wed, Oct 12, 2016 at 09:52:04AM +0200, Florian Weimer wrote:
>>>> dropping the alignment means that the padding before the lock member
>>>> vanishes. Consequently, we have just created a silent ABI change in
>>>> application code, which is a big no-no.
>>>
>>> Sure, it would be an ABI change, but how many users would it affect?
>>>
>>>> Since this is PA-RISC, which is essentially dead (neither HPE nor
>>>> Debian ship it anymore), I stand by my suggestion to bump the
>>>> fundamental alignment
>>>
>>> Or just drop support for a dead arch?
>>>
>>>> instead. Sure, it is a bit inefficient, but this will only affect
>>>> PA-RISC users. It does not even cause work for PA-RISC porters.
>>>> Conversely, if we work on this to come up with a different fix, many
>>>> more people will be affected (because they don't get all the nice
>>>> things we could work on instead), and we may need to maintain a
>>>> special GCC kludge for the alternative solution, impacting GCC
>>>> developers in particular.
>>>
>>> But sure, bumping malloc alignment is probably easiest. And people
>>> who want performance have better options than to stay on 32-bit
>>> PA-RISC anyway.
>>
>> Or we could do nothing and tell people to ignore the harmless warning.
>
> The warning is an issue because of -Werror. However, it appears easy to
> suppress it in the PA backend. I have a patch that I'm testing.
>
> We are discussing offline regarding the glibc issue. It is easy to bump
> the alignment of malloc, but I take Jakub's point and maybe we should
> break the ABI. Debian unstable churns quickly, and I think we would be
> better off being consistent with the current max_align_t and 8-byte
> aligned malloc.

I am against breaking the ABI. I would rather see us bump malloc
alignment up to 16 bytes.

The last time I changed this alignment it _immediately_ broke libstdc++
bootstrap because it's using exactly the kind of embedded
pthread_mutex_t we're talking about breaking. So in that case the Debian
builds in unstable broke right away, and I had to revert the change.
We'd have to BinNMU a bunch of things to get this working again.

Again, I think our two options are, in my order of preference:

- Disable the warning via a PA backend change.
- Bump malloc alignment.

I am sensitive to the first change being something that carries with it
extra maintenance burden, so I'm happy to see the second solution chosen
if that's what everyone wants (Florian's suggestion).

-- 
Cheers,
Carlos.
Re: [PATCH] Implement new hook for max_align_t_align
On 10/11/2016 04:04 PM, John David Anglin wrote:
> On 2016-10-11 2:50 PM, Jason Merrill wrote:
>>   /* Alignment, in bits, a C conformant malloc implementation has to
>>      provide.  The HP-UX malloc implementation provides a default
>>      alignment of 8 bytes.  This can be increased with mallopt.  The
>>      glibc implementation also provides 8-byte alignment.  Note that
>>      this isn't enough for various POSIX types such as pthread_mutex_t.
>>      However, since we no longer need the 16-byte alignment for atomic
>>      operations, we ignore the nominal alignment specified for these
>>      types.  The same is true for long double on 64-bit HP-UX.  */
>>
>> If PA malloc doesn't actually provide 16-byte alignment, this change
>> seems problematic; it will mean any type that wants 16-byte alignment
>> will silently get 8-byte alignment instead.
>
> I agree the situation is something of a mess. On linux, we could bump
> the alignment of malloc to 16 bytes. However, Carlos argued that we
> don't need to and I think doing so would be detrimental to performance.

Correct, we do not need a 16-byte alignment at runtime.

> The 16-byte alignment was used originally because the ldcw instruction
> used for atomic operations in linux threads needs 16-byte alignment.
> However, the nptl pthread implementation now uses a kernel helper for
> atomic operations. It only needs 4-byte alignment. The largest
> alignment actually needed is for long double (8 bytes). However, we
> can't change the 16-byte alignment without affecting the layout of
> various structures.

Correct, the structure padding needs to continue to be there to match
the original ABI.

> The same is true for long double on HPUX. Originally, it was planned to
> implement it in hardware and that would have required 16-byte
> alignment. It was only implemented in software with an 8-byte alignment
> requirement. Somehow, it ended up with 8 and 16-byte alignment in the
> HP 32 and 64-bit compilers, respectively. In both cases, malloc has
> 8-byte alignment. It is possible to increase the "grain" size of HP
> malloc to 16 bytes.
>
> Thus, I don't think the silent reduction to 8-byte alignment matters.
> Without the change, we are faced with spurious warnings from new.

I suggested disabling the warnings:

https://gcc.gnu.org/ml/gcc-patches/2016-09/msg01445.html

Which is what Jason suggests here:

https://gcc.gnu.org/ml/gcc-patches/2016-10/msg00786.html

Though Florian Weimer suggests just bumping malloc to return 16-byte
aligned objects and raising max_align_t, since conceptually that's
simple (but will impact performance):

https://gcc.gnu.org/ml/gcc-patches/2016-09/msg01446.html

I think the case where a user specifically requests a larger alignment
is still always bound to fail if they exceed malloc's alignment. So
removing the warning just leaves hppa where it is today. No warning.
Working with existing malloc alignment. But unable to warn users of
legitimate cases where this might matter?

I still suggest disabling the warning.

-- 
Cheers,
Carlos.
Re: [gofrontend-dev] Go patch committed: Avoid warning by using a local var for std::ofstream
On 09/21/2016 10:56 AM, Florian Weimer wrote:
> * John David Anglin:
>
>> On Tue, Sep 20, 2016 at 09:27:17PM +0200, Florian Weimer wrote:
>>> * Ian Lance Taylor:
>>>
>>>> GCC PR 77625 is about a warning while compiling the Go frontend.
>>>> The Go frontend called `new std::ofstream()`, and the warning is
>>>> "error: `new' of type `std::ofstream {aka std::basic_ofstream}'
>>>> with extended alignment 16". Frankly I'm not sure how this is
>>>> supposed to work: shouldn't it be possible to write new
>>>> std::ofstream()? But the problem is easy to avoid, since in this
>>>> case the std::ofstream can be a local variable, and that is a
>>>> better approach anyhow.
>>>>
>>>> This patch was bootstrapped on x86_64-pc-linux-gnu. Committed to
>>>> mainline.
>>>
>>> This happens only on hppa, right? It looks as if libstdc++ is
>>> seriously broken there.
>>
>> I'm not sure I understand the comment. I created a short testcase.
>> I agree that the issue is hppa specific but the extended alignment
>> is coming from the __lock field in pthread_mutex_t. I'm not sure
>> why this affects new std::ofstream().
>
> Ahh, thanks for the explanation. The C++ streams probably have some
> internal locks (hidden behind libstdc++ abstractions).

The alignment is a historical ABI accident that we inherited from
Linuxthreads.

>> The alignment of 16 arises in code that used the ldcw instruction.
>> Although this could be avoided in glibc there are numerous other
>> packages with objects requiring 16-byte alignment. So, I'm tending
>> to think the typedef for max_align_t should be adjusted on hppa-linux
>> so that it has 16-byte alignment. Is that the correct approach?
>
> Yes, and for the warning to go away, GCC's MALLOC_ABI_ALIGNMENT needs
> to be increased as well, I think.

This is not required. The __attribute__((__aligned__(16))) is not
required for pthread locks to work correctly; it is simply there to
ensure structure padding and layout match the pre-NPTL ABI so that we
don't break ABI.

In the original Linuxthreads code there was indeed a singular lock word
that needed 16-byte alignment because the hppa 'load and clear word'
atomic operation needed that alignment.

In NPTL the alignment restrictions are lifted and we use a light-weight
kernel helper (like ARM and m68k) which does compare-and-swap atomically
using kernel-side locks (synchronized with futex operations in the
kernel).

So the alignment is only needed for structure layout. Malloc can return
word-aligned memory for a pthread_mutex_t and it would work just fine.

The broader question is: How do we get rid of the warning?

> Nowadays, we can change the glibc malloc alignment fairly easily, but
> interposed mallocs won't be so nice. But it is unlikely that any of
> this matters in practice because the issue is not new (only the
> warning is).

The issue is not new, but changing glibc's malloc alignment is not
required.

> I have Cc:ed a few people who might have ideas how to deal with this.
>
> Torvald, would it be possible to align mutexes internally on hppa, to
> avoid the 16-byte alignment of the entire struct (that is, store a
> pointer to the actual mutex object, which points to a sub-region of
> the struct which is suitably aligned)?

The attribute aligned is used for laying out larger structures where the
pthread structures are embedded in them. Therefore removing attribute
aligned would result in an ABI break for all such embedded structures.
The attribute aligned must remain for such structures to continue to be
laid out the same way they were with Linuxthreads, and now NPTL.

As I mentioned before, the alignment is not required for the correct
operation of the locks.

> We probably could add libstdc++ compile-time tests which make sure
> that all relevant C++ types can be allocated with the global operator
> new.

So how do we get rid of the warning? We need to keep
__attribute__((__aligned__(16))) because of structure layout ABI
compatibility, but we don't actually need the alignment at runtime
anymore.
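To make the layout point concrete, here is a rough, untested sketch of
my own (not glibc code) showing why the attribute has to stay for ABI
reasons even though the runtime alignment is unnecessary:

/* My own illustrative sketch, not glibc code: the aligned attribute on
   the lock member forces padding in any structure that embeds it, so
   dropping the attribute silently changes the layout (and thus the ABI)
   of every such embedding structure.  */
#include <stddef.h>

typedef struct
{
  int __lock __attribute__ ((__aligned__ (16)));  /* hppa-style member */
  int __count;
} mutex_like_t;

struct embeds_mutex
{
  char tag;            /* with the attribute, 15 bytes of padding follow */
  mutex_like_t m;
};

/* offsetof (struct embeds_mutex, m) is 16 with the attribute and 4
   without it, even though the lock itself only needs word alignment at
   runtime under NPTL.  */

-- 
Cheers,
Carlos.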
Re: [PATCH] Support x86-64 TLS code sequences without PLT
On 06/03/2016 05:21 PM, H.J. Lu wrote:
> We can generate x86-64 TLS code sequences for general and local dynamic
> models without PLT, which uses indirect call via GOT:
>
> call *__tls_get_addr@GOTPCREL(%rip)
>
> instead of direct call:
>
> call __tls_get_addr[@PLT]

What are the actual pros and cons of this change? Does this improve
security? Performance?

The __tls_get_addr symbol, on x86_64, lives in ld.so, which generally
means that all shared objects (GD usage) indirect through their PLT/GOT
to make the call.

In this model, and because of lazy linking, the PLT-related GOT entries
are left read-write to be updated after resolution (ignore the BIND_NOW
+ RELRO case since in that case we do all of this up front).

After your change, without a PLT entry, these symbols can no longer be
interposed? The static linker would generate a binding (a GOT reloc for
the symbol which is resolved by the dynamic loader) that cannot be
changed, and becomes read-only after RELRO?

Is the security benefit worth the loss of interposition for this symbol?
Are there any performance gains?
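For reference, a minimal example of my own (not from the patch) of the
kind of access that generates these calls:

/* My own minimal example, not from H.J.'s patch: with -fPIC and the
   general-dynamic TLS model, an access to a TLS variable defined in
   another shared object is compiled into a call to __tls_get_addr.  */
extern __thread int counter;    /* defined in some other DSO */

int
bump_counter (void)
{
  return ++counter;             /* emits a __tls_get_addr call sequence */
}

-- 
Cheers,
Carlos.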
Re: An abridged "Writing C" for the gcc web pages
On 05/03/2016 06:39 PM, Bernd Schmidt wrote:
> On 05/03/2016 09:59 PM, Richard Sandiford wrote:
>>
>> And sometimes there are multiple acceptable ways of writing the
>> same code. Which wins is an aesthetic choice, which tools tend to
>> be bad at handling. E.g. if a parameter list is too long for a
>> line, there's often a choice of how to balance the parameters
>> across multiple lines, with some being more readable than others.
>> I wouldn't want the one chosen by a formatting tool to be the only
>> acceptable one.
>
> I have all the same reservations about using tools for this. We don't
> call it "style" for nothing, it is not purely mechanical and until we
> have general-purpose AI I don't believe computers can do a
> satisfactory job. I've seen attempts to do so in the past, and they
> have failed - you invariably end up with humans contorting their code
> to have it pass the checker, which is entirely counterproductive.

This would be silly. I'd never impose the tool as a commit blocker.

We have strayed from the initial goal: Help new developers submit well
formatted patches, and learn by example using a tool.

-- 
Cheers,
Carlos.
Re: An abridged "Writing C" for the gcc web pages
On 05/03/2016 03:59 PM, Richard Sandiford wrote:
> Jeff Law writes:
>> On 05/02/2016 02:40 PM, Carlos O'Donell wrote:
>>>
>>> However, in the end, I think that yet-another-document is not the
>>> solution we want. What we actually need is a program that just
>>> formats your source according to the GNU Coding Style and that way
>>> you can always tell your users "Run indent" and be done with it.
>>> The output of such a program should always be considered correct,
>>> and if it's not, we should fix it immediately.
>>>
>>> Why can't we use indent?
>>
>> Sadly, "indent" simply breaks c++ code.
>>
>> I think the solution here is clang-format with a suitable clang-format
>> configuration file. One has been started (gcc/contrib/clang-format),
>> but it's not yet complete.
>
> How far are you intending to go with that though?

As far as it takes to make the process of submitting patches less
painful for new contributors and sensible for maintainers?

If we end up with one tool per branch then we did it wrong. In glibc I'm
happy to retroactively say the newest formatting tool is always right
for all branches barring expert intervention.

> And if the tool isn't the final authority, and it does still remain
> a human choice, then...

We *are* the community, so yes, there is a final line in the sand where
a sensible expert maintainer may tell you "yeah but" and ask you to
change the code to make it more legible. However, 99% of the changes are
now just going to be automatic.

>>> At the end of the day I never want to see another comment about code
>>> formatting because the user used X and X always formats code
>>> correctly.
>>
>> Amen.
>
> ...this kind of comment is still going to occur. And clear documentation
> should help when it does.

Yes, it will occur, but the frequency should be much reduced.

I also agree that clear documentation helps, so I don't disparage
Bernd's work here, I think it's great. I just think that the most
important next step here is a submission code formatting tool.

> Also, coding standards and the scope of Bernd's document are more
> than formatting. E.g. take your comment in a recent thread about not
> adding "()" when referring to a function in comments. Even clang-format
> is unlikely to be smart enough to know whether a particular () is
> acceptable or not. (It would be acceptable in a quoted code snippet
> but not in a normal sentence.) Also, the tool wouldn't know when
> argument names need to be capitalised in comments, etc. Sometimes
> argument names are English words and comments contain a mixture of
> both uses. Bernd's document talked about those kinds of details too.

Agreed. That's minor.

> Sorry for the rant :-) Maybe I'm just jaded by past experience with
> automatic formatting tools. E.g. we internally use(d?) check_GNU_style.sh
> to check patches before submission and it seemed to produce far more
> false positives than true positives. (I'm not sure it ever produced
> what I'd consider a true positive for me, although I did sometimes do
> what it said even if I disagreed, as the path of least resistance.)
> That's not to say the script is bad, just that it shouldn't always be
> taken literally.

That's a shame and a clear indicator the script is wrong or the
technology wasn't around for us to do the kinds of things we wanted.

> Obviously clang-format is smarter than check_GNU_style.sh but I think
> the same principle applies.

If we get one false positive out of the new checker we've done something
wrong. It should strive to give output that everyone agrees is 100%
correct barring optional aesthetic choices.

This way the tool is run, a reviewer looks at the patch, and final
comments are made on the style. That's what I'd like to see for glibc.

$0.02.

-- 
Cheers,
Carlos.
Re: An abridged "Writing C" for the gcc web pages
On 04/22/2016 12:21 PM, Bernd Schmidt wrote:
> (Apologies if you get this twice, the mailing list didn't like the
> html attachment in the first attempt).
>
> We frequently get malformatted patches, and it's been brought to my
> attention that some people don't even make the effort to read the GNU
> coding standards before trying to contribute code. TL;DR seems to be
> the excuse, and while I find that attitude inappropriate, we could
> probably improve the situation by spelling out the most basic rules
> in an abridged document on our webpages. Below is a draft I came up
> with. Thoughts?

Despite the comments downthread, I think that abridged versions of
longish standards documents are always a good idea. They need to be
maintained, usually reactively, and that's fine. It gives a new
contributor something to read that's short enough to provide the
salient points for an initial patch/review cycle. The point is to get
better incrementally. Nobody is ever perfect the first time.

I especially like that Jason caught on that your document is actually
*more* than just the GNU Coding Standard, and that where the standard
is underspecified the projects have their own conventions. For examples
just look at:

https://sourceware.org/glibc/wiki/Style_and_Conventions

However, in the end, I think that yet-another-document is not the
solution we want. What we actually need is a program that just formats
your source according to the GNU Coding Style and that way you can
always tell your users "Run indent" and be done with it. The output of
such a program should always be considered correct, and if it's not, we
should fix it immediately.

Why can't we use indent?

If we can't use indent, what do we need to solve this problem?

At the end of the day I never want to see another comment about code
formatting because the user used X and X always formats code correctly.

-- 
Cheers,
Carlos.
Re: [PATCH] extend.texi: Expand on the perils of using the 'leaf' attribute.
On 03/14/2016 06:15 PM, Sandra Loosemore wrote:
> On 03/14/2016 12:40 PM, Carlos O'Donell wrote:
>> Using the 'leaf' attribute is difficult in certain use cases, and
>> the documentation rightly points out that signals is one such
>> problem.
>>
>> We should additionally document the following caveats:
>>
>> * Indirect function resolvers (thanks to Florian Weimer for
>>   catching this).
>> * Indirect function implementations
>> * ELF symbol interposition.
>>
>> [snip]
>>
>> gcc/
>> 2016-03-14  Carlos O'Donell
>>
>> 	* doc/extend.texi (Common Function Attributes): Describe ifunc
>> 	impact on leaf attribute.
>
> Hmm. Both your patch and the original text really need some
> copy-editing to fix noun/verb agreement, punctuation, etc. How about
> something like the attached patch? I just threw this together and
> haven't tested this in any way, but if you confirm that it builds and
> it looks OK to you, feel free to check it in.

PDF looks good. Committed as r234247.

2016-03-16  Carlos O'Donell
	    Sandra Loosemore

	* doc/extend.texi (Common Function Attributes): Describe ifunc
	impact on leaf attribute. Mention ELF interposition problems.

Index: gcc/doc/extend.texi
===================================================================
--- gcc/doc/extend.texi	(revision 234236)
+++ gcc/doc/extend.texi	(revision 234247)
@@ -2772,30 +2772,41 @@
 @item leaf
 @cindex @code{leaf} function attribute
-Calls to external functions with this attribute must return to the current
-compilation unit only by return or by exception handling. In particular, leaf
-functions are not allowed to call callback function passed to it from the current
-compilation unit or directly call functions exported by the unit or longjmp
-into the unit. Leaf function might still call functions from other compilation
-units and thus they are not necessarily leaf in the sense that they contain no
-function calls at all.
+Calls to external functions with this attribute must return to the
+current compilation unit only by return or by exception handling. In
+particular, a leaf function is not allowed to invoke callback functions
+passed to it from the current compilation unit, directly call functions
+exported by the unit, or @code{longjmp} into the unit. Leaf functions
+might still call functions from other compilation units and thus they
+are not necessarily leaf in the sense that they contain no function
+calls at all.
 
-The attribute is intended for library functions to improve dataflow analysis.
-The compiler takes the hint that any data not escaping the current compilation unit can
-not be used or modified by the leaf function. For example, the @code{sin} function
-is a leaf function, but @code{qsort} is not.
+The attribute is intended for library functions to improve dataflow
+analysis. The compiler takes the hint that any data not escaping the
+current compilation unit cannot be used or modified by the leaf
+function. For example, the @code{sin} function is a leaf function, but
+@code{qsort} is not.
 
-Note that leaf functions might invoke signals and signal handlers might be
-defined in the current compilation unit and use static variables. The only
-compliant way to write such a signal handler is to declare such variables
-@code{volatile}.
+Note that leaf functions might indirectly run a signal handler defined
+in the current compilation unit that uses static variables. Similarly,
+when lazy symbol resolution is in effect, leaf functions might invoke
+indirect functions whose resolver function or implementation function is
+defined in the current compilation unit and uses static variables. There
+is no standard-compliant way to write such a signal handler, resolver
+function, or implementation function, and the best that you can do is to
+remove the @code{leaf} attribute or mark all such static variables
+@code{volatile}. Lastly, for ELF-based systems that support symbol
+interposition, care should be taken that functions defined in the
+current compilation unit do not unexpectedly interpose other symbols
+based on the defined standards mode and defined feature test macros;
+otherwise an inadvertent callback would be added.
 
-The attribute has no effect on functions defined within the current compilation
-unit. This is to allow easy merging of multiple compilation units into one,
-for example, by using the link-time optimization. For this reason the
-attribute is not allowed on types to annotate indirect calls.
+The attribute has no effect on functions defined within the current
+compilation unit. This is to allow easy merging of multiple compilation
+units into one, for example, by using the link-time optimization. For
+this reason the attribute is not allowed on types to annotate indirect
+calls.
-
 @item malloc
 @cindex @code{malloc} function attribute
 @cindex functions that behave like malloc

-- 
Cheers,
Carlos.
Re: [PATCH] extend.texi: Expand on the perils of using the 'leaf' attribute.
On 03/14/2016 06:15 PM, Sandra Loosemore wrote:
> On 03/14/2016 12:40 PM, Carlos O'Donell wrote:
>> Using the 'leaf' attribute is difficult in certain use cases, and
>> the documentation rightly points out that signals is one such
>> problem.
>>
>> We should additionally document the following caveats:
>>
>> * Indirect function resolvers (thanks to Florian Weimer for
>>   catching this).
>> * Indirect function implementations
>> * ELF symbol interposition.
>>
>> [snip]
>>
>> gcc/
>> 2016-03-14  Carlos O'Donell
>>
>> 	* doc/extend.texi (Common Function Attributes): Describe ifunc
>> 	impact on leaf attribute.
>
> Hmm. Both your patch and the original text really need some
> copy-editing to fix noun/verb agreement, punctuation, etc. How about
> something like the attached patch? I just threw this together and
> haven't tested this in any way, but if you confirm that it builds and
> it looks OK to you, feel free to check it in.

Hey Sandra! :-)

Testing right now. I like your text better. I'll commit once I make sure
I haven't made a mistake in the formatting.

-- 
Cheers,
Carlos.
[PATCH] extend.texi: Expand on the perils of using the 'leaf' attribute.
Using the 'leaf' attribute is difficult in certain use cases, and the
documentation rightly points out that signals is one such problem.

We should additionally document the following caveats:

* Indirect function resolvers (thanks to Florian Weimer for catching
  this).
* Indirect function implementations
* ELF symbol interposition.

Note that neither the C nor C++ standards talks at all about how memory
is synchronized between the current execution context and that of a
signal handler. Therefore this patch rewords the text to say "There is
no standards compliant way..." although in practice it just works, and
one would expect the standards (POSIX) to adopt language reflecting
that existing practice works.

Lastly, we mention that the 'leaf' attribute might simply be removed if
that is the easiest option.

OK to check in?

For completeness, the motivating example from a user was like this:

cat >> leaf.c << EOF
#include <signal.h>
#include <pthread.h>
#include <unistd.h>

static int tst;

void my_h(int sig)
{
  if (tst == 1)
    _exit (0);
  _exit (1);
}

int main()
{
  signal(SIGUSR1, my_h);
  tst++;
  pthread_kill(pthread_self(), SIGUSR1);
  tst--;
  return 2;
}
EOF
gcc -g3 -O3 -o leaf leaf.c -lpthread
./leaf; echo $?
1

Where the global write of tst is elided by the compiler because
pthread_kill is marked __THROW (includes leaf). It's an open question if
pthread_kill should or should not use __THROWNL. Even if we fix that in
glibc, the changes below to the docs are still important clarifications.

gcc/
2016-03-14  Carlos O'Donell

	* doc/extend.texi (Common Function Attributes): Describe ifunc
	impact on leaf attribute.

Index: extend.texi
===================================================================
--- extend.texi	(revision 234183)
+++ extend.texi	(working copy)
@@ -2786,9 +2786,17 @@ is a leaf function, but @code{qsort} is not.
 
 Note that leaf functions might invoke signals and signal handlers might be
-defined in the current compilation unit and use static variables. The only
-compliant way to write such a signal handler is to declare such variables
-@code{volatile}.
+defined in the current compilation unit and use static variables. Similarly,
+when lazy symbol resolution is in effect, leaf functions might invoke indirect
+functions whose resolver function or implementation function might be defined
+in the current compilation unit and use static variables. There is no standards
+compliant way to write such a signal handler, resolver function, or
+implementation function, and the best that you can do is to remove the leaf
+attribute or mark all such variables @code{volatile}. Lastly, for ELF-based
+systems which support symbol interposition one should take care that functions
+defined in the current compilation unit do not unexpectedly interpose other
+symbols based on the defined standards mode otherwise an inadvertent callback
+would be added.
 
 The attribute has no effect on functions defined within the current compilation
 unit. This is to allow easy merging of multiple compilation units into one,

-- 
Cheers,
Carlos.
Re: [RFC PATCH] -Wmemset-transposed-args (PR middle-end/61294)
On 07/08/2014 03:24 PM, Jason Merrill wrote:
> I don't think we want to warn about e.g. 1-1, only about literal 0.

What rationale would you give for not warning on 1-1?

Cheers,
Carlos.
Re: [RFC PATCH] -Wmemset-transposed-args (PR middle-end/61294)
On 07/08/2014 08:50 AM, Jakub Jelinek wrote:
> Hi!
>
> This is an attempt to move the warning about transposed memset arguments
> from the glibc headers to gcc FEs. The problem with the warning in glibc
> is that it uses __builtin_constant_p and e.g. jump threading very often
> makes the warning trigger even on code where it is very unlikely a user
> swapped arguments. See e.g.
> https://gcc.gnu.org/PR51744
> https://gcc.gnu.org/PR56977
> https://gcc.gnu.org/PR61294
> https://bugzilla.redhat.com/452219
> https://bugs.kde.org/show_bug.cgi?id=311098
> https://bugzilla.mozilla.org/show_bug.cgi?id=581227
> and many others. Thus, I'd like to warn in the FEs instead, and
> once we have a GCC release with that warning in, disable the glibc
> bits/string3.h:
>   if (__builtin_constant_p (__len) && __len == 0
>       && (!__builtin_constant_p (__ch) || __ch != 0))
>     {
>       __warn_memset_zero_len ();
>       return __dest;
>     }
> warning for GCC versions with that new warning in.
>
> Any thoughts on this?
>
> If you are ok with it, either we can add it only for 4.10/5.0 and
> later only, or perhaps 4.9.2 too, or even 4.9.1. For -D_FORTIFY_SOURCE=2
> built code with glibc it shouldn't make a difference (other than having
> fewer false positives), but for other non-fortified -Wall compilation
> it would make a difference (introducing new warnings), so perhaps
> doing it only for 4.10/5.0+ is best.

This seems like a sensible solution to fixing the false positives caused
by jump threading (I haven't looked into that in detail, just briefly).

I would prefer we enable this for 4.10/5.0+ if only to avoid the fallout
(new warnings) that would happen for the distributions.

We can fix the glibc header once the fix is in gcc, unless someone
objects to the fix itself.
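For anyone skimming the thread, a tiny example of my own (not from
Jakub's patch) of the mistake the warning is meant to catch:

/* My own illustration, not from the patch: the fill value and length
   arguments are swapped, so this call clears zero bytes.  This is the
   transposed-argument pattern the proposed front-end warning and the
   existing glibc __warn_memset_zero_len check are after.  */
#include <string.h>

struct packet { char payload[64]; };

void
clear_packet (struct packet *p)
{
  memset (p, sizeof (*p), 0);   /* should be: memset (p, 0, sizeof (*p)) */
}

Cheers,
Carlos.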
[PATCH] MAINTAINERS: Update email address.
I'm working at Red Hat now as the glibc team lead within the tools
group. Please feel free to reach out to me if you have any glibc related
questions.

Still having fun working on GNU tools :-)

Committed.

2013-08-09  Carlos O'Donell

	* MAINTAINERS (Write After Approval): Update email.

Index: MAINTAINERS
===================================================================
--- MAINTAINERS	(revision 201643)
+++ MAINTAINERS	(working copy)
@@ -478,7 +478,7 @@
 Dan Nicolaescu				d...@ics.uci.edu
 Dorit Nuzman				do...@il.ibm.com
 David O'Brien				obr...@freebsd.org
-Carlos O'Donell				car...@codesourcery.com
+Carlos O'Donell				car...@redhat.com
 Peter O'Gorman				po...@thewrittenword.com
 Andrea Ornstein				andrea.ornst...@st.com
 Seongbae Park				seongbae.p...@gmail.com
---

Cheers,
Carlos.
Re: [PATCH, libstdc++] Fix 22_locale/time_get/get_weekday/char/38081-[12].cc tests for glibc 2.17
On 02/11/2013 12:28 PM, H.J. Lu wrote:
> On Mon, Feb 11, 2013 at 9:18 AM, Paolo Carlini wrote:
>> On 02/11/2013 04:33 PM, Julian Brown wrote:
>>>
>>> Hi,
>>>
>>> It seems that glibc 2.17 changes the abbreviated names of weekdays for
>>> "ru_RU" locales by removing an extraneous ".", as described in:
>>>
>>> http://sourceware.org/bugzilla/show_bug.cgi?id=10873
>>>
>>> An earlier patch (circa glibc 2.14) changed (IIUC!) archaic/unusual
>>> three-letter abbreviations to more-common two-letter abbreviations, but
>>> included dots after each weekday name, which was apparently still wrong.
>>> But, the two tests of this feature in the libstdc++ testsuite expect
>>> those dots to be present, so they fail.
>>>
>>> So, the attached patch simply removes the expectation that dots are
>>> present in the abbreviated names from the libstdc++ tests in question,
>>> if the glibc version in use is recent enough.
>>>
>>> The tests pass (with a current gcc, trunk eglibc) with the attached
>>> patch, and fail (for me) without it (cross-testing to ARM Linux, for
>>> no particular reason). OK to apply?
>>
>> I think it's Ok, yes. Thanks. However, I would appreciate if somebody
>> with a glibc 2.17 system at hand could double check. Maybe HJ?
>
> I am not familiar with locale. CC to glibc mailing list.

Paolo,

glibc 2.17 and eglibc 2.17 are so close that it should be fine to test
just one. I'd think the change should be fine for libstdc++.

Upstream will be working to reduce the differences between eglibc and
glibc so eventually these reports will just say "glibc" in their testing
notes.

Does that make sense?

Cheers,
Carlos.
Re: PATCH to libstdc++ to use __cxa_thread_atexit_impl if available
On 01/22/2013 07:24 PM, Siddhesh Poyarekar wrote:
> Cc'ing Carlos on this so that he's aware of it.
>
> Siddhesh
>
> Jakub Jelinek wrote:
>
>> On Sat, Jan 19, 2013 at 12:22:23PM -0500, Jason Merrill wrote:
>>> Siddhesh has a patch to implement the thread atexit functionality in
>>> glibc in order to integrate better with the dynamic loader and run
>>> the cleanups in the correct order. Once it's available there, this
>>> patch will make the copy in libsupc++ use it. The main
>>> __cxa_thread_atexit function will always live in libsupc++, however,
>>> in order to maintain ABI compatibility between releases of
>>> libstdc++.
>>>
>>> Does this configure change look right, or should it go in linkage.m4
>>> somewhere?
>>>
>>> I think I'll hold off checking this in until Siddhesh's patch goes
>>> into glibc.
>>
>> Yeah, it should make it into glibc first (no idea why it is taking so
>> long), otherwise we can't be sure it won't be implemented differently
>> in glibc.
>>
>> Jakub

If there is a patch that needs to be reviewed immediately then Siddhesh
should ping that patch and CC me so I can do a timely review. I'll look
into this patch tomorrow.

In general we lack qualified reviewers in glibc. We try our best to get
through the incoming patches, but given the new friendliness of the
community we're flooded with patches, but not with reviewers.

Please have patience as the community grows up.

Cheers,
Carlos.
Re: [PATCH v2] ARM: Use different linker path for hardfloat ABI
On Mon, Apr 23, 2012 at 5:36 PM, Michael Hope wrote:
> 2012-04-24  Michael Hope
> 	    Richard Earnshaw
>
> 	* config/arm/linux-eabi.h (GLIBC_DYNAMIC_LINKER_SOFT_FLOAT): Define.
> 	(GLIBC_DYNAMIC_LINKER_HARD_FLOAT): Define.
> 	(GLIBC_DYNAMIC_LINKER_DEFAULT): Define.
> 	(GLIBC_DYNAMIC_LINKER): Redefine to use the hard float path.
>
> diff --git a/gcc/config/arm/linux-eabi.h b/gcc/config/arm/linux-eabi.h
> index 80bd825..2ace6f0 100644
> --- a/gcc/config/arm/linux-eabi.h
> +++ b/gcc/config/arm/linux-eabi.h
> @@ -62,7 +62,17 @@
>  /* Use ld-linux.so.3 so that it will be possible to run "classic"
>     GNU/Linux binaries on an EABI system.  */
>  #undef GLIBC_DYNAMIC_LINKER
> -#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux.so.3"
> +#define GLIBC_DYNAMIC_LINKER_SOFT_FLOAT "/lib/ld-linux.so.3"
> +#define GLIBC_DYNAMIC_LINKER_HARD_FLOAT "/lib/ld-linux-armhf.so.3"
> +#if TARGET_DEFAULT_FLOAT_ABI == ARM_FLOAT_ABI_HARD
> +#define GLIBC_DYNAMIC_LINKER_DEFAULT GLIBC_DYNAMIC_LINKER_HARD_FLOAT
> +#else
> +#define GLIBC_DYNAMIC_LINKER_DEFAULT GLIBC_DYNAMIC_LINKER_SOFT_FLOAT
> +#endif
> +#define GLIBC_DYNAMIC_LINKER \
> +  "%{mfloat-abi=hard:" GLIBC_DYNAMIC_LINKER_HARD_FLOAT "} \
> +   %{mfloat-abi=soft*:" GLIBC_DYNAMIC_LINKER_SOFT_FLOAT "} \
> +   %{!mfloat-abi=*:" GLIBC_DYNAMIC_LINKER_DEFAULT "}"
>
>  /* At this point, bpabi.h will have clobbered LINK_SPEC.  We want to
>     use the GNU/Linux version, not the generic BPABI version.  */

This patch is broken. Please fix this.

You can't use a named enumeration's values in cpp equality tests.
ARM_FLOAT_ABI_HARD is an enumerator of a named enumeration, and the
preprocessor treats it as an unknown identifier that evaluates to 0.
Therefore "#if TARGET_DEFAULT_FLOAT_ABI == ARM_FLOAT_ABI_HARD" evaluates
to "#if 0 == 0" and is always true.

Watch out that "#define ARM_FLOAT_ABI_HARD ARM_FLOAT_ABI_HARD" for such
enums is not conforming C99/C11.

I suggest you define the values as macros and then set the named enum to
those values, then use the macros in the header equality checks, e.g.
#define VAL1 0 and then enum FOO { RVAL1 = VAL1, ... }.

Look at arm.h for the enum definition.
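To make the failure mode obvious, here is a tiny reduction of my own
(not the actual ARM headers):

/* My own reduction, not the real ARM headers: the preprocessor knows
   nothing about enumerators, so both identifiers in the #if below
   evaluate to 0 and the hard-float branch is always taken, even though
   the default ABI here is soft-float.  */
enum float_abi_type { ARM_FLOAT_ABI_SOFT, ARM_FLOAT_ABI_SOFTFP, ARM_FLOAT_ABI_HARD };

#define TARGET_DEFAULT_FLOAT_ABI ARM_FLOAT_ABI_SOFT

#if TARGET_DEFAULT_FLOAT_ABI == ARM_FLOAT_ABI_HARD   /* really 0 == 0 */
#define DEFAULT_LINKER "/lib/ld-linux-armhf.so.3"     /* always chosen */
#else
#define DEFAULT_LINKER "/lib/ld-linux.so.3"
#endif

Cheers,
Carlos.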
Re: [PATCH v2] ARM: Use different linker path for hardfloat ABI
On Sun, Apr 22, 2012 at 6:20 PM, Michael Hope wrote:
> Change the dynamic linker path for ARM hard float executables.
> Matches the path discussed and agreed on last week[1]. Carlos will
> follow up with the matching patch to GLIBC[2]. I'm happy to if he's
> busy.

I'm testing a glibc patch with Richard's alternate version. If all goes
well I'll post it tomorrow for review on libc-po...@sourceware.org.

Cheers,
Carlos.
Re: [PATCH] ARM: Use different linker path for hardfloat ABI
On Tue, Apr 3, 2012 at 6:56 PM, Joseph S. Myers wrote:
> (e) Existing practice for cases that do use different dynamic linkers is
> to use a separate library directory, not just dynamic linker name, as in
> lib32 and lib64 for MIPS or libx32 for x32; it's certainly a lot easier
> to make two sets of libraries work in parallel if you have separate
> library directories like that. So it would seem more appropriate to
> define a directory libhf for ARM (meaning you need a binutils patch as
> well to handle that directory, I think), and these different
> Debian-style names could be implemented separately in a multiarch patch
> if someone submits one that properly accounts for my review comments on
> previous patch versions (failure to produce such a fixed patch being why
> Debian multiarch directory support has not got into GCC so far).

The thread doesn't seem to be wrapping up; instead it appears to go in
circles :-)

As a glibc maintainer, when it comes to this issue, I am guided by:

(1) This is a distribution problem and the solution needs to come from a
    consensus among the distributions.

(2) The gcc/glibc community should listen to the distributions. The
    distributions have the most experience when it comes to whole-system
    issues. I certainly don't have that experience.

Unfortunately *I* give the distributions a B- or C+ for communication.
Please make it easy for me to give you an A.

It is exceedingly difficult for me to review solutions that span
multiple patches, emails, mailing lists, and communities. The best way
to avoid rehashing old problems is to document the design and get sign
off from the interested parties.

If I see uncoordinated and conflicting responses from the
distributions... I get worried.

Is there a proposal on a wiki that begins to summarize the suggestions
and solution?

Cheers,
Carlos.
Re: struct siginfo vs. siginfo_t (was: GNU C Library master sources branch, master, updated. glibc-2.15-229-g4efeffc)
On Thu, Mar 15, 2012 at 11:05 AM, Thomas Schwinge wrote:
> Hi!
>
> On 26 Feb 2012 18:17:52 -, drep...@sourceware.org wrote:
>> http://sources.redhat.com/git/gitweb.cgi?p=glibc.git;a=commitdiff;h=4efeffc1d583597e4f52985b9747269e47b754e2
>>
>> commit 4efeffc1d583597e4f52985b9747269e47b754e2
>> Author: Ulrich Drepper
>> Date:   Sun Feb 26 13:17:27 2012 -0500
>>
>>     Fix up POSIX testing in conformtest
>>
>> [...]
>> 	* sysdeps/unix/sysv/linux/bits/siginfo.h: Don't name siginfo_t
>> 	struct. [...]
>> [...]
>>
>> diff --git a/sysdeps/unix/sysv/linux/bits/siginfo.h
>> b/sysdeps/unix/sysv/linux/bits/siginfo.h
>> index ecef39d..0635e2f 100644
>> --- a/sysdeps/unix/sysv/linux/bits/siginfo.h
>> +++ b/sysdeps/unix/sysv/linux/bits/siginfo.h
>> [...]
>> @@ -47,7 +47,7 @@ typedef union sigval
>>  # define __SI_PAD_SIZE ((__SI_MAX_SIZE / sizeof (int)) - 3)
>>  # endif
>>
>> -typedef struct siginfo
>> +typedef struct
>>    {
>>      int si_signo;  /* Signal number.  */
>>      int si_errno;  /* If non-zero, an errno value associated with
>> [...]
>
> This change breaks GCC:
>
>     In file included from
>     /scratch/tschwing/FM_sh-linux-gnu-mk2/src/gcc-mainline/libgcc/unwind-dw2.c:377:0:
>     ./md-unwind-support.h: In function 'sh_fallback_frame_state':
>     ./md-unwind-support.h:182:17: error: field 'info' has incomplete type
>
> In my case, this is really libgcc/config/sh/linux-unwind.h:
>
>     [...]
>     181   struct rt_sigframe {
>     182     struct siginfo info;
>     183     struct ucontext uc;
>     184   } *rt_ = context->cfa;
>     [...]

POSIX says you get "siginfo_t" *not* "struct siginfo"; please fix the
code.

> (Is there really nobody doing nightly testing of GCC against glibc
> master branch on x86, which would have shown this earlier?)

I don't test building GCC against glibc master unless I'm making a
potentially ABI breaking change. We should be rebuilding *all* of
userspace when glibc changes. It would be nice if we set up an
OpenEmbedded system to rebuild as much of x86-64 userspace as possible
against a new glibc and check for regressions.

> Replacing struct siginfo with siginfo_t in sh/linux-unwind.h makes the
> build pass. Is this the desired approach; shall we apply this for all
> cases listed above (and submit to Boehm GC upstream)? I wonder -- if GCC
> breaks, how much software out in the wild is going to break once this
> glibc change ripples through?

You can't know until you try building a full distribution.
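Going back to the fix itself, what Thomas describes amounts to something
like this (my sketch, not the actual sh/linux-unwind.h patch):

/* My sketch of the suggested fix, not the actual libgcc change: use the
   POSIX typedef instead of the glibc-internal struct tag, which the
   glibc commit above removed.  */
#include <signal.h>

struct rt_sigframe
{
  siginfo_t info;               /* was: struct siginfo info; */
  /* ... rest of the kernel sigframe layout unchanged ... */
};

Cheers,
Carlos.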