@Araq, please correct me if I'm wrong: The main point of supporting 
event-based/retained/immediate modes of operation is to design the API in an 
open way that allows the library to be used differently for different use 
cases and needs. Instead of forcing a specific workflow such as opening a 
window and laying out toolboxes and panes, a good API lets its various 
features be used in different fashions.

A game developer might need to draw an overlay pane in a corner of the game 
window; they would want to use the library for scaling, laying out the 
elements, and handling input, integrated with the rest of the game, which is 
completely outside the scope of the UI library. Another developer working on a 
business application might want to use the library to manage all of its 
screens while still being able to reserve some areas for drawing charts with a 
separate library.

So, I guess, the key word is "composable".

Another opinion I want to express is that all the UI 
libraries/frameworks/systems with all those widget and component sets are 
things of the past. We need to look at the web from this perspective and try 
to understand what's really going on: There are of course many reasons behind 
web-based interfaces being so dominant. The deployment model (just click a 
link and it is there, nothing to download, install, or manage) of course has 
no competition. It is also very easy to learn HTML/JS/CSS, and I've known many 
"web developers" without any understanding of even the most basic concepts you 
would expect a developer to know (don't take this the wrong way, building web 
applications _properly_ is not an easy thing at all).

But leave all that aside, and please look at Electron apps. Why? Why do people 
take their tiniest application with a few buttons and wrap it in a ginormous 
package that takes up hundreds of MB of memory, if not GBs? If you search for 
UI libraries, it is a completely different world. With a few stupid div and 
span elements, some CSS, and a tiny bit of JS, one can build the interface 
_they wanted_ with less code than it takes to create a GUI window and place a 
very ugly-looking label on it. Ugliness is not the actual problem; being stuck 
with the ugliness is. In other words, the real value is in providing a way to 
define what a button is, how it is rendered, and how it behaves. Widget 
collections and the like are not what people building user interfaces really 
want. Nobody likes those buttons anymore. Go out and check 100 random web 
applications and you'll see that more than 90 of them set `appearance: none` 
on their buttons in the CSS. As the OP's first point and almost everyone here 
agrees, nobody likes the "native look" and nobody wants their app to look like 
a GUI library's demo page.

I've been working on a library myself, but it is at a very early, experimental 
stage. I feel hesitant to talk about it at length for fear of sounding like 
marketing, but I want to briefly share a few things about the architecture and 
approach, in case others working on libraries find it useful somehow:

The API is mainly built around the idea of "elements". Each element has three 
main attributes that the library user can define:

  1. Content
  2. Renderer
  3. Observer
The content of an element depends on its type. The most basic type is "Box", 
which is simply a container (like your <div>, except that "div" is not a word 
but "box" is!). The renderer is the implementation of how this "idea of a box" 
is actually drawn. The observer is simply a message hub with event dispatchers 
and listeners.
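
To make this a bit less abstract, here is a rough Nim sketch of what such a 
structure could look like. All the names are made up for this post; this is 
not my library's actual API:

```nim
type
  Observer* = ref object
    ## A message hub: event dispatchers and listeners.
    listeners*: seq[proc (event: string)]

  Element* = ref object of RootObj
    renderer*: proc (e: Element)   ## how the "idea" of this element is drawn
    observer*: Observer            ## where its events are published

  Box* = ref object of Element
    ## The most basic container type.
    children*: seq[Element]

  Text* = ref object of Element
    content*: string

proc emit*(o: Observer, event: string) =
  ## Dispatch an event to every registered listener.
  for listener in o.listeners:
    listener(event)
```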

Another common box type is "Text". So, a button would be at least two boxes.
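
Composing a button under that model could then look roughly like this (again 
just a sketch built on the made-up types above, not the real API):

```nim
# A "button" is nothing special: a Box holding a Text, plus whatever
# renderer and observer the application decides to attach to it.
let label = Text(content: "OK")
var button = Box(observer: Observer())

button.children.add(label)        # a Text is an Element, so it fits in a Box
button.renderer = proc (e: Element) =
  # draw a background and a border, then the children -- entirely up to the user
  discard
```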

Now, I know this sounds too abstract, as if drawing even a simple button would 
take a lot of work. That is not the case: elements have default renderers, but 
library users can define their own. I actually had the idea of using the same 
implementation for building graphical, textual, and audio-only interfaces 
(accessibility matters to me), and it gave me a huge smile when I saw the OP's 
7th wish.
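
And since the renderer is just something the user can swap out, nothing stops 
you from walking the same element tree with a purely textual renderer (or an 
audio one). A rough sketch, still using the made-up types from above:

```nim
# Walk the same element tree and render it as plain text instead of graphics.
proc textRender(e: Element) =
  if e of Text:
    stdout.write Text(e).content & " "
  elif e of Box:
    for child in Box(e).children:
      textRender(child)

textRender(button)   # prints "OK "; an audio renderer could speak it instead
```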

I checked Fidget and I see that it has the concept of letting the user design 
and implement completely custom interfaces; however, I'm not sure whether 
depending on an application to define things is the way to go. The users of 
the library will be programmers; this is a library, not an application, and 
they should be able to code. So I strongly support the OP's idea of having a 
DSL and/or a Nim interface, not an application. The designer of the library 
can of course build an application to help the user (= UI developer) with 
their work, and that would be really cool, but such a thing should never ever 
be something that the library relies on.

Good luck to elcritch and others working on this wish list. :) I hope that 
we'll have something great.
