2D acceleration is generally done through the same APIs, specifically OpenGL and Vulkan. Classically, the X compositor would use the GLX_EXT_texture_from_pixmap extension to import an X pixmap representing a window surface into OpenGL, where it can be used like any other texture. For the Wayland compositor, I believe you'd use EGL_WL_bind_wayland_display to bind a Wayland surface to an EGLImage, and then glEGLImageTargetTexture2DOES (can't believe I have that function name memorized) to bind that EGLImage to an OpenGL texture, where it can be used in the same way.

On the client side, I think most Linux apps still draw their UIs on CPU, usually accelerated with SIMD. Firefox and Chrome (I think SkiaGL is enabled on Linux?) are exceptions; they use OpenGL and/or Vulkan to draw their UI.

Video playback is a different beast and in theory relies on vendor-specific extensions to decode the video in hardware. However, the last time I looked at Linux video decoding (which was years ago), the drivers were awful and interfacing with each vendor's APIs was a huge pain, so most apps just did video decoding on CPU. (Besides, the Linux ecosystem prefers open codecs, and hardware has only recently gotten support for non-patent-encumbered video formats.)

---

I used to work on mobile graphics and the Android HWC stack. The scanout-time hardware was often less useful than you might think: it only won in dynamic scenes where the GPU is otherwise idle (playing video, possibly with a static UI overlay, was the premier use case). For static scenes it's more efficient to render out to a buffer (using the GPU, as the scanout overlay pipes often had limited feedback capability) and just output that with the overlays disabled. It didn't take many frames for that to be worth it. For apps that were animating or otherwise updating their window, most UI toolkits used the GPU for widget rendering anyway.

And often the scanout pipes didn't hook into the (relatively large) system caches the way the GPU did, so there were times it was again faster to composite the screen on the GPU into a single scanout buffer than to flush already-cached data and then have the scanout hardware read it back over the memory bus. And they weren't as cheap as people thought: one stat I remember is that the total area of the GPU on the OMAP4 platform was smaller than that of the display pipes. Though that is now a pretty old chip, and it always had a bit of a focus on "multimedia".

---

Honestly, I think XRENDER could be a viable API (the core idea is similar to WebRender, which Firefox uses to great effect), but the existing implementations of it are not well optimized and issue tons of draw calls using obsolete OpenGL APIs. They are slower than just drawing on CPU. You would essentially need a complete rewrite.

The bigger issue is that there's little reason to farm vector graphics rendering out to the window server in the first place. The main reason would be to avoid a window blit on HiDPI displays. But the tradeoff is that the XRENDER API is all you get, and usually apps have more sophisticated needs than it can provide. For instance, browsers can't really use XRENDER nowadays because there's no way to describe CSS 3D transforms in it. And if you use it, you're at the mercy of the window server to implement it reasonably, which is not a safe assumption. (A lot of the reason Chrome on Linux was faster than Firefox in the early days is that Firefox used XRENDER, while Chrome rendered on CPU. I remember at least one engineer at Mozilla who was bitter about that, after putting in all the work to make Firefox use it only to have it be a net loss.) In any case, you can avoid the window blit by simply using scanout compositing, as detailed in my other reply, so there really is no compelling reason to reinvent XRENDER.

---

> An alternative design would have been to have built Wayland inside X.org and tunnel the new protocol over the existing protocol, and then migrate chunks of functionality one at a time.

This is the kind of ignorance Drew was complaining about. Wayland is a fundamentally different design that cannot simply be embedded in the X protocol, and besides which, again, nobody wants to touch the Xorg code base.

Again: every single person who knows or cares about the modern Linux graphics stack is pretty much in agreement that abandoning the X approach and starting from scratch was the correct choice. This has been explained time and time again by Drew, Daniel Stone, and others much more knowledgeable about this issue than I. Explaining it over and over again to the stubborn and ignorant is getting tiring.

You want to stay on legacy, unsupported X11? Fine. Enjoy having no modern software available for your system, as toolkit and app developers remove their X code paths entirely. Red Hat is abandoning X11 already, and Red Hat IS userspace Linux.
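The Wayland compositor import path mentioned above (EGL_WL_bind_wayland_display plus glEGLImageTargetTexture2DOES) can be outlined in pseudocode. The entry points named here are the real extension functions, but this is only an abbreviated sketch: attribute lists, extension queries, and error handling are elided, and the surrounding variables (egl_display, wl_display, wl_buffer, tex) are assumed to exist.

```c
/* Pseudocode sketch of the Wayland buffer-import path (not a
   complete compositor; error handling and attributes elided). */

/* 1. Tell EGL about the Wayland display so clients can share
      GPU buffers with the compositor. */
eglBindWaylandDisplayWL(egl_display, wl_display);

/* 2. When a client commits a wl_buffer, wrap it in an EGLImage. */
EGLImageKHR image = eglCreateImageKHR(
    egl_display, EGL_NO_CONTEXT,
    EGL_WAYLAND_BUFFER_WL,           /* target: a Wayland buffer */
    (EGLClientBuffer)wl_buffer,
    NULL);

/* 3. Bind the EGLImage to a GL texture; from here the window's
      contents are sampled like any other texture when drawing
      the composited scene. */
glBindTexture(GL_TEXTURE_2D, tex);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
```

The X11 path is analogous: glXBindTexImageEXT from GLX_EXT_texture_from_pixmap plays the role of steps 2 and 3 for an X pixmap.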