Xorg is broken (was: Re: Undefined symbol "shadowUpdatePacked")

2017-03-30 Thread Henk van Oers


I have installed FreeBSD on another machine,
... same problem.

What other GUI can I try?

Please help, I only need a terminal
bigger than 80x24.


On Wed, 29 Mar 2017, Jan Beich wrote:

> Henk van Oers  writes:
>
>> [   707.153] (EE) Failed to load /usr/local/lib/xorg/modules/drivers/modesetting_drv.so:
>> /usr/local/lib/xorg/modules/drivers/modesetting_drv.so: Undefined symbol
>> "shadowUpdatePacked"
>
> See bug 218153 for workarounds. Something in your xorg.conf(5) prevents
> dependent modules from being loaded.
>
> Either remove xorg.conf or limit it to a "Device" section that specifies
> the desired Driver and maybe BusID.
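
[As an illustration: a minimal Device-only xorg.conf of the kind being
suggested here. "Card0" is an arbitrary identifier, and the Driver and
BusID values are hardware-dependent examples, not prescriptions:

  Section "Device"
          Identifier "Card0"
          Driver     "modesetting"
          # BusID    "PCI:0:2:0"    # optional; values come from pciconf -lv
  EndSection
]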




Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH_DEBUG= (powerpc64 example)

2017-03-30 Thread Mark Millard
On 2017-Mar-30, at 1:22 PM, Mark Millard  wrote:

> On 2017-Mar-29, at 8:53 AM, Brooks Davis  wrote:
> 
>> On Mon, Mar 27, 2017 at 03:25:04AM -0700, Mark Millard wrote:
>>> On 2017-Mar-27, at 2:41 AM, Dimitry Andric  wrote:
>>> 
>>>> On 26 Mar 2017, at 23:36, Mark Millard  wrote:
>>>>> 
>>>>> I upgraded from llvm40 r4 to final. An interesting result was
>>>>> its creation of a backup package for llvm40-4.0.0.r4:
>>>>> 
>>>>> about 13 cpu-core-hours running pkg create
>>>>> 
>>>>> (Remember: I've been building with WITH_DEBUG= ) Its
>>>>> single-threaded status stands out via elapsed time
>>>>> approximately matching.
>>>>> 
>>>>> I'll note that it was somewhat under 6 elapsed hours for
>>>>> staging to have been populated (-j4 with 4 cores present
>>>>> helps for this part).
>>>>> 
>>>>> (Of course these elapsed-time figures are rather system
>>>>> dependent, although the ratio might be more stable.)
>>>>> 
>>>>> 
>>>>> 
>>>>> Also interesting was:
>>>>> 
>>>>> Installed packages to be REMOVED:
>>>>>   llvm40-4.0.0.r4
>>>>> 
>>>>> Number of packages to be removed: 1
>>>>> 
>>>>> The operation will free 49 GiB.
>>>> 
>>>> Yes, this is big.  But there is no real need to build the llvm ports
>>>> with debug information, unless you want to hack on llvm itself.  And
>>>> in that case, you are better served by a Subversion checkout or Git
>>>> clone from upstream instead.
>>>> 
>>>> -Dimitry
>>> 
>>> FYI:
>>> 
>>> Historically unless something extreme like this ends up
>>> involved I build everything using WITH_DEBUG=  or explicit
>>> -g's in order to have better information on any failure.
>>> 
>>> This is extreme enough that next time I synchronize
>>> /usr/ports and it has a devel/llvm40 update I'll
>>> likely rebuild devel/llvm40 without using WITH_DEBUG= .
>>> I'm more concerned with the time it takes than with
>>> the file system space involved.
>> 
>> In the case of LLVM, enabling debug builds does a LOT more than adding
>> symbols.  It's much more like enabling WITNESS and INVARIANTS in your
>> kernel, except that the performance of the resulting binary is much
>> worse than a WITNESS kernel (more like 10x slowdown).
>> 
>> As Dimitry points out, these builds are of questionable value in ports
>> so garbage collecting the knob might be the sensible thing to do.
> 
> Sounds like the ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG technique
> would not change the "WITNESS and INVARIANTS"-like part of the
> issue. In fact, if WITH_DEBUG= triggers the cmake debug-style
> llvm40 build, ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG might not
> make any difference: the lack of optimization is enforced
> separately.
> 
> But just to see what results, I've done "pkg delete llvm40"
> and am doing another build with ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG=
> and its supporting code in place, in addition to using WITH_DEBUG=
> as the type of build from FreeBSD's viewpoint.
> 
> If you already know that the test is a waste of machine cycles,
> feel free to let me know.

The experiment showed that using ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG
made no difference for devel/llvm40, so devel/llvm40 itself has to
change, for example via the Makefile change that Dimitry Andric
separately reported as working.

(ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG would still have its uses
for various other ports.)
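
[The knob above is from my local patch, not stock ports infrastructure.
Roughly, the idea is to make bsd.port.mk's optimization-flag stripping
for WITH_DEBUG conditional, something like:

  .if defined(WITH_DEBUG) && !defined(ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG)
  CFLAGS:=	${CFLAGS:N-O*} ${DEBUG_FLAGS}
  .endif

though this sketch is illustrative rather than the actual patch text.]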

===
Mark Millard
markmi at dsl-only.net





Re: Recent devel/libclc update breaking graphics/dri - it seems

2017-03-30 Thread Walter Schwarzenfeld

Try

Makefile:

+MESA_LLVM_VER= 39
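
[Presumably that line is meant for graphics/dri's Makefile. As an
untested sketch, and only if the port does not already set the variable
unconditionally, the same pin could live in /etc/make.conf instead:

  MESA_LLVM_VER=	39

so that it survives the next ports tree update.]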



Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH_DEBUG= (powerpc64 example)

2017-03-30 Thread Dimitry Andric
On 30 Mar 2017, at 19:55, Brooks Davis  wrote:
> 
> On Thu, Mar 30, 2017 at 07:26:19PM +0200, Dimitry Andric wrote:
...
>> 
>> As said, this is because of WITH_DEBUG.  Don't use that for the llvm
>> ports, for now.  It will also allow you to build them with much less RAM
>> in the machine: especially linking can take multiple GB when debuginfo
>> is enabled, and optimization is off.
> 
> I'm looking into improving WITH_DEBUG.

I stole the following method from graphics/darktable:

Index: devel/llvm40/Makefile
===================================================================
--- devel/llvm40/Makefile	(revision 436685)
+++ devel/llvm40/Makefile	(working copy)
@@ -236,6 +236,11 @@ NOT_FOR_ARCH=	ia64
 
 .include <bsd.port.options.mk>
 
+.if defined(WITH_DEBUG)
+CMAKE_BUILD_TYPE=	RelWithDebInfo
+STRIP=
+.endif
+
 _CRTLIBDIR=	${LLVM_PREFIX:S|${PREFIX}/||}/lib/clang/${LLVM_RELEASE}/lib/freebsd
 .if ${ARCH} == "amd64"
 _COMPILER_RT_LIBS= \
This appears to work for me.
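
[To reproduce the test: with the patch applied, requesting a debug build
the usual way, e.g.

  cd /usr/ports/devel/llvm40
  make WITH_DEBUG=yes build

should now configure with CMAKE_BUILD_TYPE=RelWithDebInfo rather than
cmake's unoptimized Debug type. The invocation here is just the standard
one, not something specific to the patch.]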


> P.S. Somewhat off topic, but related.  FAIR WARNING: the days of
> self-hosted 32-bit systems are numbered.  Switching to lld from our
> ancient BFD linker will probably buy us some time, but I'd be surprised
> if you will be able to build LLVM+CLANG with a 2GB address space in 5
> years.  The sooner people make their peace with this, the better.

Yes, with the above RelWithDebInfo change, GNU ld tends to use ~5G of
memory to link the larger llvm executables, so that is definitely beyond
i386, even if you run it in a jail on an amd64 host.

And if you want to use link-time optimization, the requirements
will increase even more...

-Dimitry





Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH_DEBUG= (powerpc64 example)

2017-03-30 Thread Mark Millard
On 2017-Mar-30, at 10:55 AM, Brooks Davis  wrote:

> P.S. Somewhat off topic, but related.  FAIR WARNING: the days of
> self-hosted 32-bit systems are numbered.  Switching to lld from our
> ancient BFD linker will probably buy us some time, but I'd be surprised
> if you will be able to build LLVM+CLANG with a 2GB address space in 5
> years.  The sooner people make their peace with this, the better.

Yep.

It fights with time preferences as well: when I tried
building gcc6 "full bootstrap" via poudriere cross-
builds on amd64 (4 cores/threads used) and natively on a
bpim3 (-mcpu=cortex-a7 with 4 cores supported by FreeBSD
and 2GB of RAM), the native build was much faster as I
remember. Of course, once the cross build was using the
gcc6 internal bootstrap compiler, not much was running
the native cross-toolchain materials. (Building that internal
bootstrap compiler did have a native-clang cross-compiler
involved.)

[I do not have access to server-class thread counts or
RAM either. And the non-multithreaded stages contribute
even in those contexts as well.]

So I'm not looking forward to the issue from that
point of view.

===
Mark Millard
markmi at dsl-only.net



Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH_DEBUG= (powerpc64 example)

2017-03-30 Thread Mark Millard
On 2017-Mar-29, at 8:53 AM, Brooks Davis  wrote:

> On Mon, Mar 27, 2017 at 03:25:04AM -0700, Mark Millard wrote:
>> On 2017-Mar-27, at 2:41 AM, Dimitry Andric  wrote:
>> 
>>> On 26 Mar 2017, at 23:36, Mark Millard  wrote:
>>>> 
>>>> I upgraded from llvm40 r4 to final. An interesting result was
>>>> its creation of a backup package for llvm40-4.0.0.r4:
>>>> 
>>>> about 13 cpu-core-hours running pkg create
>>>> 
>>>> (Remember: I've been building with WITH_DEBUG= ) Its
>>>> single-threaded status stands out via elapsed time
>>>> approximately matching.
>>>> 
>>>> I'll note that it was somewhat under 6 elapsed hours for
>>>> staging to have been populated (-j4 with 4 cores present
>>>> helps for this part).
>>>> 
>>>> (Of course these elapsed-time figures are rather system
>>>> dependent, although the ratio might be more stable.)
>>>> 
>>>> 
>>>> 
>>>> Also interesting was:
>>>> 
>>>> Installed packages to be REMOVED:
>>>>   llvm40-4.0.0.r4
>>>> 
>>>> Number of packages to be removed: 1
>>>> 
>>>> The operation will free 49 GiB.
>>> 
>>> Yes, this is big.  But there is no real need to build the llvm ports
>>> with debug information, unless you want to hack on llvm itself.  And
>>> in that case, you are better served by a Subversion checkout or Git
>>> clone from upstream instead.
>>> 
>>> -Dimitry
>> 
>> FYI:
>> 
>> Historically unless something extreme like this ends up
>> involved I build everything using WITH_DEBUG=  or explicit
>> -g's in order to have better information on any failure.
>> 
>> This is extreme enough that next time I synchronize
>> /usr/ports and it has a devel/llvm40 update I'll
>> likely rebuild devel/llvm40 without using WITH_DEBUG= .
>> I'm more concerned with the time it takes than with
>> the file system space involved.
> 
> In the case of LLVM, enabling debug builds does a LOT more than adding
> symbols.  It's much more like enabling WITNESS and INVARIANTS in your
> kernel, except that the performance of the resulting binary is much
> worse than a WITNESS kernel (more like 10x slowdown).
> 
> As Dimitry points out, these builds are of questionable value in ports
> so garbage collecting the knob might be the sensible thing to do.

Sounds like the ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG technique
would not change the "WITNESS and INVARIANTS"-like part of the
issue. In fact, if WITH_DEBUG= triggers the cmake debug-style
llvm40 build, ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG might not
make any difference: the lack of optimization is enforced
separately.

But just to see what results, I've done "pkg delete llvm40"
and am doing another build with ALLOW_OPTIMIZATIONS_FOR_WITH_DEBUG=
and its supporting code in place, in addition to using WITH_DEBUG=
as the type of build from FreeBSD's viewpoint.

If you already know that the test is a waste of machine cycles,
feel free to let me know.


===
Mark Millard
markmi at dsl-only.net




Re: Undefined symbol "shadowUpdatePacked"

2017-03-30 Thread Henk van Oers


On Wed, 29 Mar 2017, Jan Beich wrote:

> Henk van Oers  writes:
>
>> [   707.153] (EE) Failed to load /usr/local/lib/xorg/modules/drivers/modesetting_drv.so:
>> /usr/local/lib/xorg/modules/drivers/modesetting_drv.so: Undefined symbol
>> "shadowUpdatePacked"
>
> See bug 218153 for workarounds. Something in your xorg.conf(5) prevents
> dependent modules from being loaded.

I did not have any xorg.conf.

> Either remove xorg.conf or limit it to a "Device" section that specifies
> the desired Driver and maybe BusID.


Creating an xorg.conf (not using the broken "Xorg -configure")
results in:

Fatal server error:
(EE) no screens found(EE)

Please make simple VGA or "vesa" work again.
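
[A sketch of the kind of minimal single-driver xorg.conf meant above;
which Driver applies is system-dependent ("vesa" is the usual suggestion
for BIOS boots, "scfb" for UEFI boots):

  Section "Device"
          Identifier "Card0"
          Driver     "vesa"
  EndSection
]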



Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH_DEBUG= (powerpc64 example)

2017-03-30 Thread Brooks Davis
On Thu, Mar 30, 2017 at 07:26:19PM +0200, Dimitry Andric wrote:
> On 30 Mar 2017, at 19:06, Alexey Dokuchaev  wrote:
> > 
> > On Mon, Mar 27, 2017 at 11:41:40AM +0200, Dimitry Andric wrote:
> >> On 26 Mar 2017, at 23:36, Mark Millard  wrote:
> >>> ...
> >>> Also interesting was:
> >>> 
> >>> Installed packages to be REMOVED:
> >>>   llvm40-4.0.0.r4
> >>> 
> >>> Number of packages to be removed: 1
> >>> 
> >>> The operation will free 49 GiB.
> >> 
> >> Yes, this is big.  But there is no real need to build the llvm ports
> >> with debug information, unless you want to hack on llvm itself.
> > 
> > Cc'ing jmd@ and rezny@.
> > 
> > I've been watching increasing size of our LLVM packages with increasing
> > worry.  This is from my tinderbox cache:
> > 
> >   % env LANG=C ls -lh llvm3*
> >   -rw-r--r--  1 root  wheel    17M Jan 29  2016 llvm35-3.5.2_1.txz
> >   -rw-r--r--  1 root  wheel    18M Mar  7  2016 llvm36-3.6.2_2.txz
> >   -rw-r--r--  1 root  wheel    27M Feb 28 01:05 llvm37-3.7.1_4.txz
> >   -rw-r--r--  1 root  wheel   207M Jan 19 18:20 llvm38-3.8.1_5.txz
> >   -rw-r--r--  1 root  wheel   244M Mar 23 16:42 llvm39-3.9.1_2.txz
> > 
> > Dimitry, do you know what caused such a huge bump in 37 -> 38?
> 
> Yes, up to llvm37, the ports were split in devel/llvmXY and lang/clangXY
> parts, with separate ports for e.g. compiler-rt and other LLVM projects.

It's mostly that we build both shared and static libraries.  llvm37
merged clang and llvm (with a clang37 metaport).  Dynamic builds were
also broken for a while, which blew up program size.

> > They take lots of time to build and package.  And given that llvm
> > is an indirect dependency of any X11-related port, it pessimises their
> > build times as well (devel/libclc now requires devel/llvm40 after
> > r437268).
> 
> The previous split looks pretty hard to maintain, so that is most likely
> the reason for combining all components in one port after 3.8.
> Unfortunately the side effect is that it is way less granular.

I kept it up for several revisions past when it was desupported, but
it's absolutely impossible with the cmake infrastructure unless we
want to build all of LLVM four times to get clang, llvm, lldb, and lld.
I'm pretty sure that would cause more complaints. :)

> If we ever get infrastructure for generating multiple packages out of
> one port, the devel/llvm* ports are very good candidates. :)

Very much so.

> > With a 49 GiB llvm40, I guess I won't be able to build-test ports as my
> > hardware would just not be capable enough.
> 
> As said, this is because of WITH_DEBUG.  Don't use that for the llvm
> ports, for now.  It will also allow you to build them with much less RAM
> in the machine: especially linking can take multiple GB when debuginfo
> is enabled, and optimization is off.

I'm looking into improving WITH_DEBUG.

-- Brooks

P.S. Somewhat off topic, but related.  FAIR WARNING: the days of
self-hosted 32-bit systems are numbered.  Switching to lld from our
ancient BFD linker will probably buy us some time, but I'd be surprised
if you will be able to build LLVM+CLANG with a 2GB address space in 5
years.  The sooner people make their peace with this, the better.




Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH_DEBUG= (powerpc64 example)

2017-03-30 Thread Dimitry Andric
On 30 Mar 2017, at 19:06, Alexey Dokuchaev  wrote:
> 
> On Mon, Mar 27, 2017 at 11:41:40AM +0200, Dimitry Andric wrote:
>> On 26 Mar 2017, at 23:36, Mark Millard  wrote:
>>> ...
>>> Also interesting was:
>>> 
>>> Installed packages to be REMOVED:
>>> llvm40-4.0.0.r4
>>> 
>>> Number of packages to be removed: 1
>>> 
>>> The operation will free 49 GiB.
>> 
>> Yes, this is big.  But there is no real need to build the llvm ports
>> with debug information, unless you want to hack on llvm itself.
> 
> Cc'ing jmd@ and rezny@.
> 
> I've been watching increasing size of our LLVM packages with increasing
> worry.  This is from my tinderbox cache:
> 
>   % env LANG=C ls -lh llvm3*
>   -rw-r--r--  1 root  wheel    17M Jan 29  2016 llvm35-3.5.2_1.txz
>   -rw-r--r--  1 root  wheel    18M Mar  7  2016 llvm36-3.6.2_2.txz
>   -rw-r--r--  1 root  wheel    27M Feb 28 01:05 llvm37-3.7.1_4.txz
>   -rw-r--r--  1 root  wheel   207M Jan 19 18:20 llvm38-3.8.1_5.txz
>   -rw-r--r--  1 root  wheel   244M Mar 23 16:42 llvm39-3.9.1_2.txz
> 
> Dimitry, do you know what caused such a huge bump in 37 -> 38?

Yes, up to llvm37, the ports were split into devel/llvmXY and lang/clangXY
parts, with separate ports for e.g. compiler-rt and other LLVM projects.

For llvm38 and later, the devel/llvmXY port contains almost *all*
upstream LLVM components, which are then selectable at port configure
time.  For instance, devel/llvm40 shows:

┌──────────────────────────── llvm40-4.0.0 ─────────────────────────────┐
│ ┌───────────────────────────────────────────────────────────────────┐ │
│ │ [x] CLANG        Build clang                                      │ │
│ │ [x] COMPILER_RT  Sanitizer libraries                              │ │
│ │ [x] DOCS         Build and/or install documentation               │ │
│ │ [x] EXTRAS       Extra clang tools                                │ │
│ │ [x] GOLD         Build the LLVM Gold plugin for LTO               │ │
│ │ [x] LIT          Install lit and FileCheck test tools             │ │
│ │ [x] LLD          Install lld, the LLVM linker                     │ │
│ │ [x] LLDB         Install lldb, the LLVM debugger (ignored on 9.x) │ │
│ │ [x] OPENMP       Install libomp, the LLVM OpenMP runtime library  │ │
│ └───────────────────────────────────────────────────────────────────┘ │
├───────────────────────────────────────────────────────────────────────┤
│                                <  OK  >                                │
└───────────────────────────────────────────────────────────────────────┘

If you want to reduce the size of the package, only select the part(s)
you need.  I think you can get by with just the CLANG option, for most
dependent ports.
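
[A sketch of doing that non-interactively, assuming the standard
per-port option override syntax in /etc/make.conf; untested:

  devel_llvm40_SET=	CLANG
  devel_llvm40_UNSET=	COMPILER_RT DOCS EXTRAS GOLD LIT LLD LLDB OPENMP

Alternatively, run "make config" in devel/llvm40 and untick the boxes.]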



> They take lots of time to build and package.  And given that llvm
> is an indirect dependency of any X11-related port, it pessimises their
> build times as well (devel/libclc now requires devel/llvm40 after
> r437268).

The previous split looks pretty hard to maintain, so that is most likely
the reason for combining all components in one port after 3.8.
Unfortunately the side effect is that it is way less granular.

If we ever get infrastructure for generating multiple packages out of
one port, the devel/llvm* ports are very good candidates. :)


> With a 49 GiB llvm40, I guess I won't be able to build-test ports as my
> hardware would just not be capable enough.

As said, this is because of WITH_DEBUG.  Don't use that for the llvm
ports, for now.  It will also allow you to build them with much less RAM
in the machine: especially linking can take multiple GB when debuginfo
is enabled, and optimization is off.

-Dimitry





Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH_DEBUG= (powerpc64 example)

2017-03-30 Thread Matthew Rezny
On Thursday 30 March 2017 17:06:48 Alexey Dokuchaev wrote:
> On Mon, Mar 27, 2017 at 11:41:40AM +0200, Dimitry Andric wrote:
> > On 26 Mar 2017, at 23:36, Mark Millard  wrote:
> > > ...
> > > Also interesting was:
> > > 
> > > Installed packages to be REMOVED:
> > >   llvm40-4.0.0.r4
> > > 
> > > Number of packages to be removed: 1
> > > 
> > > The operation will free 49 GiB.
> > 
> > Yes, this is big.  But there is no real need to build the llvm ports
> > with debug information, unless you want to hack on llvm itself.
> 
> Cc'ing jmd@ and rezny@.
> 
> I've been watching increasing size of our LLVM packages with increasing
> worry.  This is from my tinderbox cache:
> 
>   % env LANG=C ls -lh llvm3*
>   -rw-r--r--  1 root  wheel    17M Jan 29  2016 llvm35-3.5.2_1.txz
>   -rw-r--r--  1 root  wheel    18M Mar  7  2016 llvm36-3.6.2_2.txz
>   -rw-r--r--  1 root  wheel    27M Feb 28 01:05 llvm37-3.7.1_4.txz
>   -rw-r--r--  1 root  wheel   207M Jan 19 18:20 llvm38-3.8.1_5.txz
>   -rw-r--r--  1 root  wheel   244M Mar 23 16:42 llvm39-3.9.1_2.txz
> 
> Dimitry, do you know what caused such a huge bump in 37 -> 38?
> 
> They take lots of time to build and package.  And given that llvm
> is an indirect dependency of any X11-related port, it pessimises their
> build times as well (devel/libclc now requires devel/llvm40 after
> r437268).
> 
> With a 49 GiB llvm40, I guess I won't be able to build-test ports as my
> hardware would just not be capable enough.
> 
> ./danfe

LLVM 3.8 introduced the option to build a shared LLVM library, which is what
Mesa needs for use at runtime (e.g. for compiling shaders), separate from
linking to it. Previous versions only had one option: if the library was built,
then all the LLVM binaries were statically linked to it.

LLVM devs state that statically linking the LLVM binaries is only for developer
use; users should not do that. Mesa's need was causing distributions to ship
statically linked LLVM binaries against that advice. So, they added a pair of
switches so that we can separately control whether that library is built
(required for Mesa) and used to link LLVM binaries (not desired).

llvm{35,36,37} are statically linked and thus smaller than normal. llvm38
switched to dynamic linking, the default, so the size grew. llvm39 added the
library Mesa needs (we didn't turn on the option in llvm38 since Mesa jumped
from 37 to 39), so it grew a little more. I assume llvm40 will be a bit
bigger, but I do not expect to see another jump like the one you observed.
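
[For reference, I believe the pair of upstream CMake switches being
described is:

  -DLLVM_BUILD_LLVM_DYLIB=ON    # build the shared libLLVM that Mesa loads
  -DLLVM_LINK_LLVM_DYLIB=OFF    # but do not link the LLVM tools against it

though I have not re-checked the exact names against the 3.8/3.9 release
history.]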



Re: FYI: what it takes for RAM+swap to build devel/llvm40 with 4 processors or cores and WITH_DEBUG= (powerpc64 example)

2017-03-30 Thread Alexey Dokuchaev
On Mon, Mar 27, 2017 at 11:41:40AM +0200, Dimitry Andric wrote:
> On 26 Mar 2017, at 23:36, Mark Millard  wrote:
> > ...
> > Also interesting was:
> > 
> > Installed packages to be REMOVED:
> > llvm40-4.0.0.r4
> > 
> > Number of packages to be removed: 1
> > 
> > The operation will free 49 GiB.
> 
> Yes, this is big.  But there is no real need to build the llvm ports
> with debug information, unless you want to hack on llvm itself.

Cc'ing jmd@ and rezny@.

I've been watching increasing size of our LLVM packages with increasing
worry.  This is from my tinderbox cache:

  % env LANG=C ls -lh llvm3*
  -rw-r--r--  1 root  wheel    17M Jan 29  2016 llvm35-3.5.2_1.txz
  -rw-r--r--  1 root  wheel    18M Mar  7  2016 llvm36-3.6.2_2.txz
  -rw-r--r--  1 root  wheel    27M Feb 28 01:05 llvm37-3.7.1_4.txz
  -rw-r--r--  1 root  wheel   207M Jan 19 18:20 llvm38-3.8.1_5.txz
  -rw-r--r--  1 root  wheel   244M Mar 23 16:42 llvm39-3.9.1_2.txz

Dimitry, do you know what caused such a huge bump in 37 -> 38?

They take lots of time to build and package.  And given that llvm
is an indirect dependency of any X11-related port, it pessimises their
build times as well (devel/libclc now requires devel/llvm40 after
r437268).

With a 49 GiB llvm40, I guess I won't be able to build-test ports as my
hardware would just not be capable enough.

./danfe


Re: textproc/kibana50 and version of node.js

2017-03-30 Thread Tom Judge

> On Mar 30, 2017, at 11:27 AM, Miroslav Lachman <000.f...@quip.cz> wrote:
> 
> Bradley T. Hughes wrote on 2017/03/30 11:10:
>> 
>>> On 30 Mar 2017, at 10:00, Miroslav Lachman <000.f...@quip.cz> wrote:
>>> 
>>> Hi,
>> 
>> Hi! :)
>> 
>>> we are using npm + node in version 7.8.0 and ElasticSearch in version
>>> 5.0.2. Now we need Kibana and Kibana X-Pack, but official Kibana bundles
>>> node v0.10 or v0.12, and the FreeBSD port of Kibana depends on node v4. This
>>> conflicts with the already installed and in-use node v7.
>> 
>> This mail caught my eye. Since Node.js v0.10 and v0.12 have reached
>> end-of-life, I wanted to see if I could find out why they were bundling such
>> an old version. Looking at GitHub, it actually looks like Kibana is bundled
>> with Node.js v6.9.5:
>> https://github.com/elastic/kibana/blob/v5.3.0/package.json#L243-L246
> 
> I am sorry, it was my fault. That information was for older versions of Kibana
> (4.4.1, 4.3.2, 4.1.5):
> https://discuss.elastic.co/t/kibana-4-4-1-4-3-2-4-1-5-updated-node-js-versions-due-to-upstream-vulnerabilities/41643
> 
> I created a Kibana 5.0.2 package with poudriere, with the Makefile modified to:
> 
> RUN_DEPENDS=  node>=0.8.0:www/node
> 
> Kibana is up and running with Node.js 7.8. I hope it will be stable :)
> 
> Miroslav Lachman


Please open a bug with this information and I will include it in the port with 
the next update to 5.3.

Tom


Re: textproc/kibana50 and version of node.js

2017-03-30 Thread Miroslav Lachman

Bradley T. Hughes wrote on 2017/03/30 11:10:

> On 30 Mar 2017, at 10:00, Miroslav Lachman <000.f...@quip.cz> wrote:
>
>> Hi,
>
> Hi! :)
>
>> we are using npm + node in version 7.8.0 and ElasticSearch in version 5.0.2.
>> Now we need Kibana and Kibana X-Pack, but official Kibana bundles node v0.10
>> or v0.12, and the FreeBSD port of Kibana depends on node v4. This conflicts
>> with the already installed and in-use node v7.
>
> This mail caught my eye. Since Node.js v0.10 and v0.12 have reached
> end-of-life, I wanted to see if I could find out why they were bundling such
> an old version. Looking at GitHub, it actually looks like Kibana is bundled
> with Node.js v6.9.5:
> https://github.com/elastic/kibana/blob/v5.3.0/package.json#L243-L246

I am sorry, it was my fault. That information was for older versions of
Kibana (4.4.1, 4.3.2, 4.1.5):

https://discuss.elastic.co/t/kibana-4-4-1-4-3-2-4-1-5-updated-node-js-versions-due-to-upstream-vulnerabilities/41643

I created a Kibana 5.0.2 package with poudriere, with the Makefile modified to:

RUN_DEPENDS=	node>=0.8.0:www/node

Kibana is up and running with Node.js 7.8. I hope it will be stable :)
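
[For anyone reproducing this, a sketch of the poudriere invocation, where
"110amd64" and "local" stand for whatever jail and ports tree carry the
edited Makefile:

  poudriere bulk -j 110amd64 -p local textproc/kibana50
]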

Miroslav Lachman


!!!! Greetings !!!!

2017-03-30 Thread kong khemara
Hello,
I am Barr Kong Khemara. I humbly ask if you are related to my client, who
died a couple of years ago in a car accident here in my country, Cambodia.
I also wish to inquire whether it is possible for different families to
have the same last name as yours by coincidence, without sharing common
roots. Kindly get back to me if your email is still valid, to enable me to
give you the details of my message or make headway in my search.
Regards,
Kong Khemara


Recent devel/libclc update breaking graphics/dri - it seems

2017-03-30 Thread Tommy Scheunemann

Hi everyone,

it seems - hopefully not just for me (sorry) - that the recent devel/libclc
update breaks graphics/dri.
Upon updating libclc, llvm40 gets pulled into the system and graphics/dri
gets rebuilt using llvm40 - even if 3.9 is installed - for compilation.

The compile then stops with:

--- SNIP ---
In file included from draw/draw_llvm.c:45:
./gallivm/lp_bld_intr.h:69:20: error: unknown type name 'LLVMAttribute';
did you mean 'LLVMAttributeRef'?
   LLVMAttribute attr);
   ^
   LLVMAttributeRef
/usr/local/llvm40/include/llvm-c/Types.h:116:40: note: 'LLVMAttributeRef'
declared here
typedef struct LLVMOpaqueAttributeRef *LLVMAttributeRef;
   ^
draw/draw_llvm.c:1577:10: error: implicit declaration of function
'LLVMAddAttribute' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
 LLVMAddAttribute(LLVMGetParam(variant_func, i),
--- SNIP ---

Even after manually upgrading libclc, deinstalling llvm40 and reinstalling
llvm39, llvm40 gets pulled in again (at the latest when it comes to
graphics/dri) and the compile run fails.


My guess (a guess, crystal ball, lottery) is that the update of libclc breaks
graphics/dri due to llvm40 dependencies.


Any useful idea on how to fix this? My second guess (again with a crystal
ball, playing the lottery) is that mesa 13.0.6 isn't playing nice with llvm40
atm.
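
[A quick way to see what keeps dragging llvm40 back in; package names
here are examples and may differ on a given system:

  pkg info -r llvm40    # installed packages that depend on llvm40
  pkg info -d dri       # what the dri package records as dependencies
]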


Kind regards


FreeBSD ports you maintain which are out of date

2017-03-30 Thread portscout
Dear port maintainer,

The portscout new distfile checker has detected that one or more of your
ports appears to be out of date. Please take the opportunity to check
each of the ports listed below, and if possible and appropriate,
submit/commit an update. If any ports have already been updated, you can
safely ignore the entry.

You will not be e-mailed again for any of the port/version combinations
below.

Full details can be found at the following URL:
http://portscout.freebsd.org/po...@freebsd.org.html


Port                | Current version | New version
--------------------+-----------------+------------
devel/ocaml-pomap   | 3.0.6           | v3.0.7
--------------------+-----------------+------------
emulators/mame      | 0.166           | mame0184
--------------------+-----------------+------------
emulators/mess      | 0.166           | mame0184
--------------------+-----------------+------------
www/surf            | 0.7             | 2.0
--------------------+-----------------+------------


If any of the above results are invalid, please check the following page
for details on how to improve portscout's detection and selection of
distfiles on a per-port basis:

http://portscout.freebsd.org/info/portscout-portconfig.txt

Thanks.


Re: textproc/kibana50 and version of node.js

2017-03-30 Thread Bradley T. Hughes

> On 30 Mar 2017, at 10:00, Miroslav Lachman <000.f...@quip.cz> wrote:
> 
> Hi,

Hi! :)

> we are using npm + node in version 7.8.0 and ElasticSearch in version 5.0.2.
> Now we need Kibana and Kibana X-Pack, but official Kibana bundles node
> v0.10 or v0.12, and the FreeBSD port of Kibana depends on node v4. This
> conflicts with the already installed and in-use node v7.

This mail caught my eye. Since Node.js v0.10 and v0.12 have reached
end-of-life, I wanted to see if I could find out why they were bundling such an
old version. Looking at GitHub, it actually looks like Kibana is bundled with
Node.js v6.9.5:
https://github.com/elastic/kibana/blob/v5.3.0/package.json#L243-L246

> Is it possible to run Kibana with node v7, or will it not work at all?

> According to this https://www.elastic.co/guide/en/kibana/current/setup.html
> "Running Kibana against a separately maintained version of Node.js is not 
> supported."

I do not know the answer to that question, unfortunately. :(

> Kind regards
> 
> Miroslav Lachman

--
Bradley T. Hughes
bradleythug...@fastmail.fm



textproc/kibana50 and version of node.js

2017-03-30 Thread Miroslav Lachman

Hi,
we are using npm + node in version 7.8.0 and ElasticSearch in version
5.0.2. Now we need Kibana and Kibana X-Pack, but official Kibana bundles
node v0.10 or v0.12, and the FreeBSD port of Kibana depends on node v4.
This conflicts with the already installed and in-use node v7.


Is it possible to run Kibana with node v7, or will it not work at all?

According to this https://www.elastic.co/guide/en/kibana/current/setup.html
"Running Kibana against a separately maintained version of Node.js is 
not supported."


Kind regards

Miroslav Lachman


Prevent Jar file from extracting

2017-03-30 Thread zack wylde
