Additive GUIs


Specify the attributes of the GUI and let the computer generate the GUI

YAML idea

I have an idea whereby you specify the relationships between widgets on the screen and the computer generates the layout.

Rather than you positioning widgets manually, the computer generates the layout. Essentially the widgets form a system of inequalities in which each widget's x and y are set relative to the others.

We say that one widget is to the left of another widget, or that one widget is below another. This is how you might describe practically any GUI.

The idea is that the computer generates variations of the layout and the human reviews them.
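
To make the inequality idea concrete, here is a minimal sketch (my own illustration, not part of any spec) of how a layout generator might check the two spatial predicates used in the example further down, leftOf and below, against candidate widget positions:

// Sketch only: translating spatial predicates into inequalities that a
// constraint solver, or a brute-force search over candidate layouts,
// could check. The Rect type and widget names are illustrative.
interface Rect { x: number; y: number; width: number; height: number; }

type Layout = Record<string, Rect>;

// "A leftOf B"  means  A.x + A.width <= B.x
// "A below B"   means  A.y >= B.y + B.height
function satisfies(layout: Layout, predicate: string): boolean {
  const [subject, relation, object] = predicate.split(" ");
  const a = layout[subject];
  const b = layout[object];
  if (!a || !b) return true; // non-spatial predicates are ignored here
  switch (relation) {
    case "leftOf": return a.x + a.width <= b.x;
    case "below":  return a.y >= b.y + b.height;
    default:       return true;
  }
}

A generator would then produce candidate layouts, keep the ones for which every spatial predicate holds, and let the human review them.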

You also define the data flow between widgets. backedBy sets the data source for a widget. mappedTo is a reference to a template that defines the GUI for each item in a collection; it is the same idea as a functional map.

The system is configured in triples of the form "subject predicate object".
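
A minimal sketch of parsing one of these triples, assuming the subject and predicate never contain spaces while the object may (as in the insert-new-item predicates below):

// Sketch: a triple is "subject predicate object"; split on the first
// two spaces only, so an object such as a JSON payload stays intact.
interface Triple { subject: string; predicate: string; object: string; }

function parseTriple(line: string): Triple {
  const first = line.indexOf(" ");
  const second = line.indexOf(" ", first + 1);
  return {
    subject: line.slice(0, first),
    predicate: line.slice(first + 1, second),
    object: line.slice(second + 1),
  };
}

parseTriple("Todos backedBy todos");
// => { subject: "Todos", predicate: "backedBy", object: "todos" }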

Have you ever heard of TodoMVC? https://todomvc.com/

It's a simple problem implemented in many frameworks. The problem is a to-do list.

This is a to-do app written in Additive GUIs.

You should notice that it is extremely compact.

{
    "predicates": [
        "NewTodo leftOf insertButton",
        "Todos below insertButton",
        "Todos backedBy todos",
        "Todos mappedTo todos",
        "Todos key .description",
        "Todos editable $item.description",
        "insertButton on:click insert-new-item",
        "insert-new-item 0.pushes {\"description\": \"$item.NewTodo.description\"}",
        "insert-new-item 0.pushTo $item.todos",
        "NewTodo backedBy NewTodo",
        "NewTodo mappedTo editBox",
        "NewTodo editable $item.description",
        "NewTodo key .description"
    ],
    "widgets": {
        "todos": {
            "predicates": [
                "label hasContent .description"
            ]
        },
        "editBox": {
            "predicates": [
                "NewItemField hasContent .description"
            ]
        }
    },
    "data": {
        "NewTodo": {
            "description": "Hello world"
        },
        "todos": [
            { "description": "todo one" },
            { "description": "todo two" },
            { "description": "todo three" }
        ]
    }
}
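
As a reading aid (this is a sketch of the semantics, not the actual interpreter), the backedBy and mappedTo predicates above amount to a functional map over the backing collection, with the "todos" widget template projecting the .description field:

// Sketch: "Todos backedBy todos" binds the widget to data.todos, and
// "Todos mappedTo todos" renders each item with the "todos" template,
// whose predicate "label hasContent .description" picks the field.
interface TodoItem { description: string; }

const data = {
  NewTodo: { description: "Hello world" },
  todos: [
    { description: "todo one" },
    { description: "todo two" },
    { description: "todo three" },
  ] as TodoItem[],
};

// mappedTo behaves like Array.prototype.map: collection in, widgets out.
function renderTodos(todos: TodoItem[]): string[] {
  return todos.map(item => `<label>${item.description}</label>`);
}

renderTodos(data.todos);
// => ["<label>todo one</label>", "<label>todo two</label>", "<label>todo three</label>"]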

chronological,





I wonder: is OpenAI Codex making an internal representation similar to your formalism of Additive GUIs when it is instructed informally, like in this video?

I think it would be a great simplification for defining UIs more rigorously. We could just provide such compact UI specifications as an API response, and if all browsers had certain libraries preloaded (e.g., by someone making an npm-preloading browser extension), they could render the UI very fast, without extra web requests, basically making front-end development unnecessary and replacing it with standardized API views of such declarative statements.
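
As a rough illustration of that flow (renderAdditiveGui and the /api/todo-app endpoint are hypothetical stand-ins for whatever a preloaded library would expose):

// Hypothetical flow: the API returns the compact spec, and a library
// already present in the browser turns it into DOM. renderAdditiveGui
// and the endpoint URL are made up for this sketch.
declare function renderAdditiveGui(spec: unknown, mount: HTMLElement): void;

async function loadUi(): Promise<void> {
  const response = await fetch("/api/todo-app"); // hypothetical endpoint
  const spec = await response.json();            // the JSON spec shown above
  renderAdditiveGui(spec, document.body);        // preloaded renderer, no app code shipped
}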



Yes Mindey, Additive GUIs is based on the concept that a GUI is a query that splices multiple dimensions into a multidimensional plane, where each dimension is a widget and the points are the states of that widget. There is a function that defines the relation between the points of each dimension and another set of points in each dimension, driven perhaps by human interaction or server interaction.

If APIs can return a highly dense definition of how the GUI should work and be rendered, then we can remove a lot of custom code.

Most interactions with data-oriented GUIs like Infinity are just verbs applied to items in lists. They are reactive in response to data collections, or they add items to collections.

For drawing GUIs, like diagram tools such as PowerPoint or Paint, I think you need a different model.
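
A small sketch of the "verbs against items in lists" idea; the verb set here (push, remove, edit) is illustrative rather than a fixed vocabulary:

// Sketch: data-oriented GUIs reduce to a few verbs over collections.
interface TodoItem { description: string; }

const verbs = {
  push:   (items: TodoItem[], item: TodoItem) => [...items, item],
  remove: (items: TodoItem[], index: number) => items.filter((_, i) => i !== index),
  edit:   (items: TodoItem[], index: number, description: string) =>
            items.map((it, i) => (i === index ? { ...it, description } : it)),
};

// e.g. the "insert-new-item 0.pushes ..." predicate corresponds to:
let todos: TodoItem[] = [{ description: "todo one" }];
todos = verbs.push(todos, { description: "Hello world" });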




chronological,


// If APIs can return a highly dense definition of how the GUI should work and be rendered, then we can remove a lot of custom code.

I see. To simplify matters, the problem then is fully defining such a declarative language, and then constructing the mapping between a specification in that language and its implementation via (HTML, JS, CSS) triplets as components, which is what defines reactive UI elements, regardless of whether it is in the pure or virtual DOM. The state space of each such triplet-as-component then corresponds to what you define as a dimension, and the state space of the entire UI is the Cartesian product of the codomains of those components, each particular state of the entire UI being a "multidimensional plane".

I see where this concept is going, and that it is important, but to get it actually working, I see a lot of work required to define the exhaustive set of terms and then the browser-bound interpreter for this to run.
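
A type-level sketch of that product structure, written out for the to-do example (the type names are mine):

// Sketch: each component's state space as a type; the whole UI's state
// space is the product of those, and one value of UiState is one
// particular state, a point in the "multidimensional plane".
type NewTodoState = { description: string };    // dimension 1
type TodosState   = { description: string }[];  // dimension 2

type UiState = {
  NewTodo: NewTodoState;
  Todos: TodosState;
};

const state: UiState = {
  NewTodo: { description: "Hello world" },
  Todos: [{ description: "todo one" }],
};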



I've written a very simple interpreter that works with the example. It uses React. The rendered HTML is ugly, but todo adding works.

The hard part is, as you say, providing a language flexible enough to support most GUIs.

My goal was for IDEs to be representable in this GUI format.

I dread to look at the code for IntelliJ; I bet it's very, very complicated.




chronological,