Re: Any intent to implement the W3C generic sensor API in Firefox?

2017-11-08 Thread Anne van Kesteren
On Thu, Nov 9, 2017 at 6:29 AM, Boris Zbarsky  wrote:
> On 11/3/17 3:00 PM, Jonathan Kingston wrote:
>> Which specific issues?
>> The API in the specification is promise-ready by using the Permissions
>> API, is behind a permission prompt, and requires a secure context.
>
> Does that address the privacy concerns we had with device APIs before?

Only if we know what question to pose to users and make it clear what
the implications of their choice are. I haven't seen so much as a
mockup, but if Chrome is indeed planning on doing this, we might see
something soon?


-- 
https://annevankesteren.nl/


Re: Any intent to implement the W3C generic sensor API in Firefox?

2017-11-08 Thread Boris Zbarsky

On 11/3/17 3:00 PM, Jonathan Kingston wrote:

Which specific issues?
The API in the specification is promise-ready by using the Permissions API,
is behind a permission prompt, and requires a secure context.


Does that address the privacy concerns we had with device APIs before?

-Boris


Re: Are char* and uint8_t* interchangeable?

2017-11-08 Thread Kris Maglione

On Wed, Nov 08, 2017 at 08:09:17PM -0800, gsquel...@mozilla.com wrote:

On Thursday, November 9, 2017 at 1:11:20 PM UTC+11, Kris Maglione wrote:

On Wed, Nov 08, 2017 at 06:04:27PM -0800, jww...@mozilla.com wrote:
>Is it always safe and portable to do:
>
>char* p1 = ...;
>uint8_t* p2 = reinterpret_cast<uint8_t*>(p1);
>uint8_t u8 = p2[0];
>
>without breaking strict aliasing?

Strict aliasing permits any typed data to be accessed as char*,
so yes, this is always safe and portable. Though they aren't
strictly interchangeable.


Kris, if you look at the code sample, it's doing the reverse: Accessing char* data 
as uint8_t*. Is *that* safe?


The point is that strict aliasing allows char* data to alias any other 
type. If it helps, you can think of it as a char* accessing uint8_t* 
data, rather than the other way around.


Although, in this particular case, it's safe either way, since pointers 
to signed and unsigned variants of the same type are always allowed to 
alias each other.
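
A minimal sketch of both directions (assuming, as on every mainstream
platform, that uint8_t is a typedef for unsigned char):

#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
  // char data read through uint8_t*: fine, because uint8_t is (in
  // practice) unsigned char, and pointers to the signed and unsigned
  // variants of the same type may alias each other.
  char buf[4] = {'\x7f', 'E', 'L', 'F'};
  uint8_t* p = reinterpret_cast<uint8_t*>(buf);
  printf("%02x %c%c%c\n", unsigned(p[0]), p[1], p[2], p[3]);

  // Arbitrary data read through char*: the direction the strict-aliasing
  // rule blesses explicitly.
  uint32_t word = 0xdeadbeef;
  char* bytes = reinterpret_cast<char*>(&word);
  for (size_t i = 0; i < sizeof(word); ++i)
    printf("%02x ", unsigned(static_cast<unsigned char>(bytes[i])));
  printf("\n");
}

Note that "not strictly interchangeable" still holds at the type level:
char and uint8_t remain distinct types, so overloads taking char* and
uint8_t* don't collide, and only the character types are guaranteed to be
able to alias anything.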



Re: Are char* and uint8_t* interchangeable?

2017-11-08 Thread gsquelart
On Thursday, November 9, 2017 at 1:11:20 PM UTC+11, Kris Maglione wrote:
> On Wed, Nov 08, 2017 at 06:04:27PM -0800, jww...@mozilla.com wrote:
> >Is it always safe and portable to do:
> >
> >char* p1 = ...;
> >uint8_t* p2 = reinterpret_cast<uint8_t*>(p1);
> >uint8_t u8 = p2[0];
> >
> >without breaking strict aliasing?
> 
> Strict aliasing permits any typed data to be accessed as char*, 
> so yes, this is always safe and portable. Though they aren't 
> strictly interchangeable.

Kris, if you look at the code sample, it's doing the reverse: Accessing char* 
data as uint8_t*. Is *that* safe?


Re: Are char* and uint8_t* interchangeable?

2017-11-08 Thread Kris Maglione

On Wed, Nov 08, 2017 at 06:04:27PM -0800, jww...@mozilla.com wrote:

Is it always safe and portable to do:

char* p1 = ...;
uint8_t* p2 = reinterpret_cast<uint8_t*>(p1);
uint8_t u8 = p2[0];

without breaking strict aliasing?


Strict aliasing permits any typed data to be accessed as char*, 
so yes, this is always safe and portable. Though they aren't 
strictly interchangeable.



Are char* and uint8_t* interchangeable?

2017-11-08 Thread jwwang
Is it always safe and portable to do:

char* p1 = ...;
uint8_t* p2 = reinterpret_cast<uint8_t*>(p1);
uint8_t u8 = p2[0];

without breaking strict aliasing?


Intent to unship: <a> as <area> in image maps

2017-11-08 Thread Emilio Cobos Álvarez
Hi,

In bug 1317937 I intend to unship the feature of <a> elements acting the
same way as <area> elements in image maps.

This functionality was specced in HTML 4, but no other browser
implemented it, and it was removed from HTML 5.

Timothy (:tnikkel) tried to do it before, but it got blocked on getting
telemetry for this (bug 1350532).

Given that that work didn't advance for 8 months, that it blocks (or at
least greatly simplifies) long-standing bug 135040, which keeps biting us
and is a nice source of FIXME comments[1], and that this is not
implemented anywhere else and is no longer in any spec, we think it's
reasonable to just remove it. We don't expect any webcompat fallout from
it.

Let me know if there's any concern about doing this.

 -- Emilio

[1]: http://searchfox.org/mozilla-central/search?q=135040


Re: JSBC: JavaScript Start-up Bytecode Cache

2017-11-08 Thread dmosedale
Does this cache bytecode for about: pages as well?  As an example, caching
bytecode for various JS scripts loaded from resource: and chrome: URLs for
about:home might yield interesting startup improvements...

Thanks,
Dan


Re: Bigger hard drives wanted (was Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux))

2017-11-08 Thread Gregory Szorc
This thread is good feedback. I think changing the default to a 1TB SSD is
a reasonable request.

Please send any future comments regarding hardware to Sophana (
s...@mozilla.com) to increase the chances that feedback is acted on.

On Wed, Nov 8, 2017 at 9:09 AM, Julian Seward  wrote:

> On 08/11/17 17:28, Boris Zbarsky wrote:
>
> > The last desktop I was shipped came with a 512 GB drive.  [..]
> >
> > In practice, I routinely run out of disk space and have to delete
> > objdirs and rebuild them the next day, because I have to build
> > something else in a different srcdir...
>
> I totally agree.  I had a machine with a 512GB SSD and wound up in the
> same endless juggle/compress/delete-and-rebuild game.  I got a new machine
> with a 512GB SSD *and* a 1T HDD, and that helps a lot, although the perf
> hit from the HDD especially when linking libxul is terrible.
>
> J


Re: Bigger hard drives wanted (was Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux))

2017-11-08 Thread Julian Seward
On 08/11/17 17:28, Boris Zbarsky wrote:

> The last desktop I was shipped came with a 512 GB drive.  [..]
>
> In practice, I routinely run out of disk space and have to delete
> objdirs and rebuild them the next day, because I have to build
> something else in a different srcdir...

I totally agree.  I had a machine with a 512GB SSD and wound up in the
same endless juggle/compress/delete-and-rebuild game.  I got a new machine
with a 512GB SSD *and* a 1T HDD, and that helps a lot, although the perf
hit from the HDD especially when linking libxul is terrible.

J


Re: Bigger hard drives wanted (was Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux))

2017-11-08 Thread Michael de Boer
I’d like to add the VM multiplier: I’m working mainly on OSX and run a Windows
and a Linux VM in there, each with their own checkouts and objdirs. Instead of
allocating comfortably sized virtual disks, I end up resizing them quite
frequently, keeping them as small as possible so that OSX doesn't run out of
space.

Mike.

> On 8 Nov 2017, at 17:28, Boris Zbarsky  wrote:
> 
> On 11/7/17 4:13 PM, Sophana "Soap" Aik wrote:
>> Nothing is worse than hearing IT picked or chose hardware that nobody
>> actually wanted or will use.
> 
> If I could interject with a comment about the hardware we pick...
> 
> The last desktop I was shipped came with a 512 GB drive.  One of our srcdirs 
> is about 5-8GB nowadays (we seem to have mach commands that dump large stuff 
> in the srcdir).
> 
> Each objdir is 9+GB at least on Linux.  Figure 25GB for source + opt + debug.
> 
> For the work I do (e.g. backporting security fixes every so often) I need a 
> release tree, a beta tree, and ESR tree, and at least 3 tip trees.  That's at 
> least 150GB.  If I want to have an effective ccache, that's about 20-30GB 
> (recall that each objdir is 9+GB!).  Call it 175GB.
> 
> If I want to dual-boot or have a VM so I can do both Linux and Windows work, 
> that's 350GB.  Plus the actual operating systems involved.  Plus any data 
> files that might be being generated as part of work, etc.
> 
> In practice, I routinely run out of disk space and have to delete objdirs and 
> rebuild them the next day, because I have to build something else in a 
> different srcdir...
> 
> -Boris



Bigger hard drives wanted (was Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux))

2017-11-08 Thread Boris Zbarsky

On 11/7/17 4:13 PM, Sophana "Soap" Aik wrote:

Nothing is worse than hearing IT picked or chose hardware that nobody
actually wanted or will use.


If I could interject with a comment about the hardware we pick...

The last desktop I was shipped came with a 512 GB drive.  One of our 
srcdirs is about 5-8GB nowadays (we seem to have mach commands that dump 
large stuff in the srcdir).


Each objdir is 9+GB at least on Linux.  Figure 25GB for source + opt + 
debug.


For the work I do (e.g. backporting security fixes every so often) I 
need a release tree, a beta tree, and ESR tree, and at least 3 tip 
trees.  That's at least 150GB.  If I want to have an effective ccache, 
that's about 20-30GB (recall that each objdir is 9+GB!).  Call it 175GB.


If I want to dual-boot or have a VM so I can do both Linux and Windows 
work, that's 350GB.  Plus the actual operating systems involved.  Plus 
any data files that might be being generated as part of work, etc.


In practice, I routinely run out of disk space and have to delete 
objdirs and rebuild them the next day, because I have to build something 
else in a different srcdir...


-Boris


Re: PSA: Microsoft VMs for testing

2017-11-08 Thread mhoye



On 2017-11-07 5:03 PM, James Graham wrote:

On 07/11/17 13:47, Tom Ritter wrote:

Warning: they auto-shut down after 30 minutes (maybe? I never timed
it). I haven't put any effort into figuring out if that's
configurable, but I don't think it is.


I think that only happens once the trial period expires but you can 
reinstall the VM to reset that (you'll lose state of course).


There's also a full Win10 Developer VM available here:

https://developer.microsoft.com/en-us/windows/downloads/virtual-machines

with a bunch of tools preinstalled that'll run until mid-January. If you 
need access to every version of Windows ever, MSDN "Professional" 
subscriptions cost about $550/yr and give you access to basically 
everything.


- mhoye



Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-08 Thread Sophana "Soap" Aik
Thanks Jeff, I understand your reasoning. 14 cores vs 10 is definitely
huge.

I will also add that there isn't anything stopping us from having more than
one config, just like we do with laptops.

I'm fortunate to be in this situation to finally help you all have
influence on the type of hardware that makes sense for your use cases.
Nothing is worse than hearing IT picked or chose hardware that nobody
actually wanted or will use.

I'll continue to pursue the Core i9 as an option; it's just that currently
there aren't many OEM builders providing these yet.

On Tue, Nov 7, 2017 at 1:00 PM, Jeff Muizelaar wrote:

> The Core i9s are quite a bit cheaper than the Xeon Ws:
> https://ark.intel.com/products/series/125035/Intel-Xeon-Processor-W-Family
> vs
> https://ark.intel.com/products/126695
>
> I wouldn't want to trade ECC for 4 cores.
>
> -Jeff
>
> On Tue, Nov 7, 2017 at 3:51 PM, Sophana "Soap" Aik wrote:
> > Kris has touched on the many advantages of having a standard model. From
> > what I am seeing with most people's use case scenario, only the GPU is
> > what will determine what the machine is used for. IE: VR Research team
> > may end up only needing a GPU upgrade.
> >
> > Fortunately the new W-Series Xeons seem to be equal to or better than
> > the Core i9s, but with ECC support. So there's no sacrifice in
> > performance in single-threaded or multi-threaded workloads.
> >
> > With all that said, we'll move forward with the evaluation machine and
> > find out for sure in real world testing. :)
> >
> > On Tue, Nov 7, 2017 at 12:30 PM, Kris Maglione wrote:
> >>
> >> On Tue, Nov 07, 2017 at 03:07:55PM -0500, Jeff Muizelaar wrote:
> >>>
> >>> On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik wrote:
> >>>>
> >>>> Hi All,
> >>>>
> >>>> I'm in the middle of getting another evaluation machine with a
> >>>> 10-core W-Series Xeon Processor (that is similar to the 7900X in
> >>>> terms of clock speed and performance) but with ECC memory support.
> >>>>
> >>>> I'm trying to make sure this is a "one size fits all" machine as
> >>>> much as possible.
> >>>
> >>> What's the advantage of having a "one size fits all" machine? I
> >>> imagine there's quite a range of uses and preferences for these
> >>> machines. E.g. some people are going to be spending more time waiting
> >>> for a single core and so would prefer a smaller core count and higher
> >>> clock; other people want a machine that's as wide as possible. Some
> >>> people would value performance over correctness and so would likely
> >>> not want ECC, etc. I've heard a number of horror stories of people
> >>> ending up with hardware that's not well suited to their tasks just
> >>> because that was the only hardware on the list.
> >>
> >> High core count Xeons will divert power from idle cores to increase
> >> the clock speed of saturated cores during mostly single-threaded
> >> workloads.
> >>
> >> The advantage of a one-size-fits-all machine is that it means more of
> >> us have the same hardware configuration, which means fewer of us
> >> running into independent issues, more of us being able to share
> >> software configurations that work well, easier purchasing and stocking
> >> of upgrades and accessories, ... I own a personal high-end Xeon
> >> workstation, and if every developer at the company had to go through
> >> the same teething and configuration troubles that I did while breaking
> >> it in, we would not be in a good place.
> >>
> >> And I don't really want to get into the weeds on ECC again, but the
> >> performance of load-reduced ECC is quite good, and the additional cost
> >> of ECC is very low compared to the cost of developer time over the two
> >> years that they're expected to use it.
> >
> > --
> > moz://a
> > Sophana "Soap" Aik
> > IT Vendor Management Analyst
> > IRC/Slack: soap



-- 
moz://a
Sophana "Soap" Aik
IT Vendor Management Analyst
IRC/Slack: soap


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-08 Thread Sophana "Soap" Aik
Kris has touched on the many advantages of having a standard model. From
what I am seeing with most people's use case scenario, only the GPU is what
will determine what the machine is used for. IE: VR Research team may end
up only needing a GPU upgrade.

Fortunately the new W-Series Xeons seem to be equal to or better than the
Core i9s, but with ECC support. So there's no sacrifice in performance in
single-threaded or multi-threaded workloads.

With all that said, we'll move forward with the evaluation machine and find
out for sure in real world testing. :)



On Tue, Nov 7, 2017 at 12:30 PM, Kris Maglione wrote:

> On Tue, Nov 07, 2017 at 03:07:55PM -0500, Jeff Muizelaar wrote:
>
>> On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik 
>> wrote:
>>
>>> Hi All,
>>>
>>> I'm in the middle of getting another evaluation machine with a 10-core
>>> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
>>> speed and performance) but with ECC memory support.
>>>
>>> I'm trying to make sure this is a "one size fits all" machine as much as
>>> possible.
>>>
>>
>> What's the advantage of having a "one size fits all" machine? I
>> imagine there's quite a range of uses and preferences for these
>> machines. E.g. some people are going to be spending more time waiting
>> for a single core and so would prefer a smaller core count and higher
>> clock; other people want a machine that's as wide as possible. Some
>> people would value performance over correctness and so would likely
>> not want ECC, etc. I've heard a number of horror stories of people
>> ending up with hardware that's not well suited to their tasks just
>> because that was the only hardware on the list.
>>
>
> High core count Xeons will divert power from idle cores to increase the
> clock speed of saturated cores during mostly single-threaded workloads.
>
> The advantage of a one-size-fits-all machine is that it means more of us
> have the same hardware configuration, which means fewer of us running into
> independent issues, more of us being able to share software configurations
> that work well, easier purchasing and stocking of upgrades and accessories,
> ... I own a personal high-end Xeon workstation, and if every developer at
> the company had to go through the same teething and configuration troubles
> that I did while breaking it in, we would not be in a good place.
>
> And I don't really want to get into the weeds on ECC again, but the
> performance of load-reduced ECC is quite good, and the additional cost of
> ECC is very low compared to the cost of developer time over the two years
> that they're expected to use it.
>



-- 
moz://a
Sophana "Soap" Aik
IT Vendor Management Analyst
IRC/Slack: soap


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-08 Thread Jean-Yves Avenard
With all this talk…

I’m eagerly waiting for the iMac Pro.

Best of all worlds really:
- High core count
- ECC RAM
- 5K 27” display
- Great graphic card
- Super silent…

I’ve been using a Mac Pro 2013 (the trash can one), Xeon E5 8 cores, 32 GB ECC
RAM, connected to two 27” screens (one 5K with DPI set at 200%, the other a
2560x1440 Apple Thunderbolt).

It runs Windows, Mac and Linux flawlessly (though under Linux I never managed
to get more than one screen working at a time).

It compiles on Mac, even with Stylo, in under 12 minutes, and on Windows in 19
minutes (it used to be 6 minutes and 12 minutes respectively before all this
Rust thing came in)… And that's using mach with only 14 jobs, so that I can
continue to work on the machine without noticing it's doing a CPU-intensive
task. The UI stays ultra responsive.

And best of all, it’s sitting 60cm from my ear and I can’t hear anything at
all…

This has been my primary machine since 2014; I’ve had no desire to upgrade, as
no other machine would allow me such a comfortable development environment
across all the platforms we support.

It was difficult at the beginning to choose between the higher-frequency
6-core and the 8-core model. But that turned out to be a moot issue, as the
8-core model, when only 6 cores are in use, clocks as high as the 6-core
version…

The Mac Pro was an expensive machine, but seeing that it will last me longer
than your usual machine, I do believe that in the long term it will be the
best value for money.

My $0.02

> On 8 Nov 2017, at 8:43 am, Henri Sivonen  wrote:
> 
> I agree that workstation GPUs should be avoided. Even if they were as
> well supported by Linux distro-provided Open Source drivers as
> consumer GPUs, it's at the very least more difficult to find
> information about what's true about them.
> 
> We don't need the GPU to be at max spec like we need the CPU to be.
> The GPU doesn't affect build times, and for running Firefox it seems
> more useful to see how it runs with a consumer GPU.
> 
> I think we also shouldn't overdo multi-monitor *connectors* at the
> expense of Linux-compatibility, especially considering that
> DisplayPort is supposed to support monitor chaining behind one port on
> the graphics card. The Quadro M2000 that caused trouble for me had
> *four* DisplayPort connectors. Considering the number of ports vs.
> Linux distros Just Working, I'd expect the prioritizing Linux distros
> Just Working to be more useful (as in letting developers write code
> instead of troubleshoot GPU issues) than having a "professional"
> number of connectors as the configuration offered to people who don't
> ask for a lot of connectors. (The specs for the older generation
> consumer-grade Radeon RX 460 claim 5 DisplayPort screens behind the
> one DisplayPort connector on the card, but I haven't verified it
> empirically, since I don't have that many screens to test with.)
> 
> On Tue, Nov 7, 2017 at 10:27 PM, Jeff Gilbert wrote:
>> Avoid workstation GPUs if you can. At best, they're just a more
>> expensive consumer GPU. At worst, they may sacrifice performance we
>> care about in their optimization for CAD and modelling workloads, in
>> addition to moving us further away from testing what our users use. We
>> have no need for workstation GPUs, so we should avoid them if we can.
>> 
>> On Mon, Nov 6, 2017 at 10:32 AM, Sophana "Soap" Aik  wrote:
>>> Hi All,
>>> 
>>> I'm in the middle of getting another evaluation machine with a 10-core
>>> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
>>> speed and performance) but with ECC memory support.
>>> 
>>> I'm trying to make sure this is a "one size fits all" machine as much as
>>> possible.
>>> 
>>> Also there are some AMD Radeon workstation GPU's that look interesting to
>>> me. The one I was thinking to include was a Radeon Pro WX2100, 2GB, FH
>>> (5820T) so we can start testing that as well.
>>> 
>>> Stay tuned...
>>> 
>>> On Mon, Nov 6, 2017 at 12:46 AM, Henri Sivonen  wrote:
>>> 
 Thank you for including an AMD card among the ones to be tested.
 
 - -
 
 The Radeon RX 460 mentioned earlier in this thread arrived. There was
 again enough weirdness that I think it's worth sharing in case it
 saves time for someone else:
 
 Initially, for multiple rounds of booting with different cable
 configurations, the Lenovo UEFI consistenly displayed nothing if a
 cable with a powered-on screen was plugged into the DisplayPort
 connector on the RX 460. To see the boot password prompt or anything
 else displayed by the Lenovo UEFI, I needed to connect a screen to the
 DVI port and *not* have a powered-on screen connected to DisplayPort.
 However, Lenovo UEFI started displaying on a DisplayPort-connected
 screen (with or without DVI also connected) after one time I had had a
 powered-on screen 

Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-08 Thread Mike Hommey
On Wed, Nov 08, 2017 at 09:43:29AM +0200, Henri Sivonen wrote:
> I agree that workstation GPUs should be avoided. Even if they were as
> well supported by Linux distro-provided Open Source drivers as
> consumer GPUs, it's at the very least more difficult to find
> information about what's true about them.
> 
> We don't need the GPU to be at max spec like we need the CPU to be.
> The GPU doesn't affect build times, and for running Firefox it seems
> more useful to see how it runs with a consumer GPU.
> 
> I think we also shouldn't overdo multi-monitor *connectors* at the
> expense of Linux-compatibility, especially considering that
> DisplayPort is supposed to support monitor chaining behind one port on
> the graphics card. The Quadro M2000 that caused trouble for me had
> *four* DisplayPort connectors. Considering the number of ports vs.
> Linux distros Just Working, I'd expect prioritizing Linux distros
> Just Working to be more useful (as in letting developers write code
> instead of troubleshooting GPU issues) than having a "professional"
> number of connectors as the configuration offered to people who don't
> ask for a lot of connectors. (The specs for the older generation
> consumer-grade Radeon RX 460 claim 5 DisplayPort screens behind the
> one DisplayPort connector on the card, but I haven't verified it
> empirically, since I don't have that many screens to test with.)

Yes, you can daisy-chain many monitors with DisplayPort, but there's a
bandwidth limit you need to be aware of.

DP 1.2 can only handle 4 HD screens at 60Hz, and *one* 4K screen at 60Hz.
DP 1.3 and 1.4 can "only" handle two 4K screens at 60Hz.
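
A rough back-of-the-envelope check of those limits (a sketch with my own
assumed numbers: DP 1.2 HBR2 is 4 lanes x 5.4 Gbit/s, 8b/10b encoding
leaves ~17.28 Gbit/s usable, and CVT blanking overhead is taken as
roughly 20%):

#include <cstdio>

int main() {
  // DP 1.2 HBR2: 4 lanes x 5.4 Gbit/s, times 8/10 for 8b/10b encoding.
  const double usable_gbps = 4 * 5.4 * 0.8;  // ~17.28 Gbit/s
  const double blanking = 1.2;               // ~20% CVT blanking overhead

  struct Mode { const char* name; double w, h, hz, bpp; };
  const Mode modes[] = {
    {"1080p60", 1920, 1080, 60, 24},
    {"4K60",    3840, 2160, 60, 24},
  };

  for (const Mode& m : modes) {
    double gbps = m.w * m.h * m.hz * m.bpp * blanking / 1e9;
    printf("%-8s ~%4.1f Gbit/s -> %d per DP 1.2 link\n",
           m.name, gbps, int(usable_gbps / gbps));
  }
}

That works out to four 1080p60 screens or a single 4K60 screen per link,
which lines up with the figures above.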

Also, support for multi-screen over DP is usually flaky wrt hot-plug. At
least that's been my experience on both Linux and Windows, and I hear
Windows is actually worse. Also, I usually get my monitors set in a
different order when I upgrade the kernel. (And I'm only using two HD
monitors)

Mike