Channel: Embedded – Qt Blog

Multi-process embedded systems with Qt for Device Creation and Wayland


With the Qt 5.4 update for Qt for Device Creation it is now possible, on certain embedded systems, to run Qt applications on top of a Wayland compositor relying only on the provided reference images, without any additional modifications. The stable and supported approach remains eglfs, the lightweight platform plugin that runs fullscreen Qt applications on top of EGL and fbdev with the best possible performance. Those who do not require any of the enhanced tooling but need multiple GUI applications running on the same screen can, however, start experimenting with a Wayland-based system already today.

In this post we will take a look at how this can be done on i.MX6-based systems, for example the Sabre SD and BD-SL-i.MX6 boards.

The wayland platform plugin provided by the Qt Wayland module is now an official part of Qt 5.4.0. Whenever the necessary dependencies, like the wayland client libraries and the wayland-scanner utility, are available in the sysroot, the platform plugin will be built together with the rest of Qt. In the toolchains and the ready-to-be-flashed reference images for Sabre Lite and Sabre SD everything is in place already. They contain Wayland and Weston 1.4.0, based on Yocto’s recipes (daisy release).

We will use Weston as our compositor. This provides a desktop-like experience out of the box. Those looking for a more customized experience should look no further than the Qt Compositor libraries of the Qt Wayland module, which provide the building blocks for easily creating custom compositors with Qt and QML. These components are still under development, so stay tuned for more news about them in the future.

Video playback and some other applications running on a Sabre SD board

Now let’s see what it takes to run a compositor and our Qt applications on top of it on an actual device. It is important to note that there is no tooling support for such a setup at the moment. This means that deploying and debugging from Qt Creator will likely not function as expected. Some functionality, like touch input and the Qt Virtual Keyboard, will function in a limited manner. For example, the virtual keyboard will appear on a per-application, per-window basis instead of being global to the entire screen. Support for such features will be improved in future releases. For the time being performance and stability may also not be on par with the standard single-process offering. On the positive side, advanced features like accelerated video playback and Qt WebEngine are already functional.

  • Qt Enterprise Embedded’s reference images will launch a Qt application upon boot. This is either /usr/bin/qtlauncher, containing various demos, or the user’s custom application deployed previously via Qt Creator. The launch and lifetime of these applications is managed by the appcontroller utility. To kill the currently running application, log in to the device via adb (adb shell) or ssh (ssh root@device_ip) and run appcontroller --stop.
  • Now we can launch a compositor. For now this will be Weston. The environment variable XDG_RUNTIME_DIR may not be set, so we need to take care of that first: export XDG_RUNTIME_DIR=/var/run followed by weston --tty=1 --use-gal2d=1 &. The --use-gal2d=1 option makes Weston perform compositing via Vivante’s hardware compositing APIs and the GC320 composition core instead of drawing textured quads via OpenGL ES.
  • Once the “desktop” has appeared, we are ready to launch clients. The default Qt platform plugin is eglfs; this has to be changed either via the QT_QPA_PLATFORM environment variable or by passing -platform wayland to the applications. The former is better in our case because we can then continue to use appcontroller to launch our apps. Let’s run export QT_QPA_PLATFORM=wayland followed by appcontroller --launch qtlauncher. The --launch option disables some of the tooling support and will make sure a subsequent application launch via appcontroller will not terminate the previous application, as is the case with the default, eglfs-based, single GUI process system.
  • At this point a decorated window should appear on the screen, with the familiar demo application running inside. If the window frame and the title bar are not necessary, performance can be improved greatly by disabling such decorations altogether: just do export QT_WAYLAND_DISABLE_WINDOWDECORATION=1 before launching the application. Note that the window position can still be changed by connecting a keyboard and mouse, and dragging with the Windows/Command key held down.

To summarize:

appcontroller --stop
export XDG_RUNTIME_DIR=/var/run
weston --tty=1 --use-gal2d=1 &
export QT_QPA_PLATFORM=wayland
appcontroller --launch qtlauncher

and that’s it, we have successfully converted our device from a single GUI app per screen model to a desktop-like, multi-process environment. To see it all in action, check out the following video:

The post Multi-process embedded systems with Qt for Device Creation and Wayland appeared first on Qt Blog.


Qt Weekly #23: Qt 5.5 enhancements for Linux graphics and input stacks


The upcoming Qt 5.5 has received a number of improvements when it comes to running without a windowing system on Linux. While these target mainly Embedded Linux devices, they are also interesting for those wishing to run Qt applications on their desktop machines directly on the Linux console without X11 or Wayland.

We will now take a closer look at the new approach to supporting kernel mode setting and the direct rendering manager, as well as the recently introduced libinput support.

eglfs improvements

In previous versions there used to be a kms platform plugin. It is still in place in Qt 5.5 but is no longer built by default. As features accumulate, getting multiple platform plugins to function equally well becomes more complicated. From Qt’s and the application’s point of view the kms and eglfs platforms are pretty much the same: both are based on EGL and OpenGL ES 2.0. Supporting KMS/DRM is conceptually no different from providing any other device- or vendor-specific eglfs backend (the so-called device hooks providing the glue between EGL and fbdev).

In order to achieve this in a maintainable way, the traditional static, compiled-in hooks approach had to be enhanced a bit. Those familiar with bringing Qt 5 up on embedded boards know this well: in the board-specific makespecs under qtbase/mkspecs/devices one comes across lines like the following:

  EGLFS_PLATFORM_HOOKS_SOURCES = $$PWD/qeglfshooks_imx6.cpp

This compiles the given file into the eglfs platform plugin. This is good enough when building for a specific board, but it is not going to cut it in environments where multiple backends are available and hardcoding any given one is not acceptable. Therefore an alternative, plugin-based approach has been introduced. Looking at the folder qtbase/plugins/egldeviceintegrations after building Qt 5.5, we find the following (assuming the necessary header and library files were present while configuring and building):

  libqeglfs-kms-integration.so
  libqeglfs-x11-integration.so

These, as the names suggest, are the eglfs backends for KMS/DRM and X11. The latter is positioned mainly as an internal, development-only solution, although it may also become useful on embedded boards like the Jetson TK1, where the EGL and OpenGL drivers are tied to X11. The former is more interesting for us now: it is the new KMS/DRM backend, and it will be selected and used automatically when no static hooks are specified in the makespecs and the application is not running under X. Alternatively, the plugin to be used can be explicitly specified by setting the QT_QPA_EGLFS_INTEGRATION environment variable to, for instance, eglfs_kms or eglfs_x11. Note that for the time being the board-specific hooks are kept in the old, compiled-in format, so there is not much need to worry about the new plugin-based system unless KMS/DRM is desired. In the future, however, it is expected to gain more attention, since newly introduced board adaptations are recommended to be provided as plugins.
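As a concrete sketch, forcing a given backend from the shell could look like this (the application name is hypothetical; the environment variable and plugin names are those described above):

```shell
# Pick the KMS/DRM backend explicitly instead of relying on auto-detection
export QT_QPA_EGLFS_INTEGRATION=eglfs_kms
# or, for the X11-based development backend:
# export QT_QPA_EGLFS_INTEGRATION=eglfs_x11
./myapp -platform eglfs
```

When the variable is left unset, eglfs falls back to the automatic selection described above.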

libinput support

libinput is a library to handle input devices, providing device detection, pointer, keyboard and touch events, and additional functionality like pointer acceleration and proper touchpad handling. It is used by Weston, the reference Wayland compositor, and in the future potentially also in X.org.

Using libinput in place of the traditional evdevmouse|keyboard|touch input handlers of Qt 5 has a number of advantages. By using it, Qt applications get the same behavior, configuration and calibration that other clients, for example Weston, use. It also simplifies bringup scenarios, since there is no need to fight Qt’s input stack separately when libinput is already proven to work.

On the downside, the number of dependencies is increased: libudev, libevdev and, optionally, libmtdev are all necessary in addition to libinput. Furthermore, keyboard mapping is performed via xkbcommon. This is not a problem for desktop and many embedded distros, but it can be an issue on handcrafted systems or on an Android baselayer. Therefore libinput support is optional, and the evdev* handlers remain the default choice.

Let’s see it in action

How can all this be tested on an ordinary Linux PC? Easily, assuming KMS/DRM is usable (e.g. because it is using Mesa with working KMS and DRM support). Below is our application (a standard Qt example from qtbase/examples/opengl/qopenglwidget) running as an ordinary X11 client, using the xcb platform plugin, on a laptop with Intel integrated graphics:

Qt app with widgets and OpenGL on X11

Now, let’s switch to another virtual console and set the following before running the application:

  export QT_QPA_PLATFORM=eglfs
  export QT_QPA_GENERIC_PLUGINS=libinput
  export QT_QPA_EGLFS_DISABLE_INPUT=1

This means we will use the eglfs platform plugin, disabling its built-in keyboard, mouse and touchscreen support (that reads directly from the input devices instead of relying on an external library like libinput), and rely on libinput to get mouse, keyboard and touch events.

If everything goes well, the result is something like this:

Qt app with widgets and OpenGL on KMS/DRM

The application is running just fine, even though there is no windowing system here. Both OpenGL and the traditional QWidgets are functional. As an added bonus, even multiple top-level widgets are functional. This was not supported with the old kms platform plugin, whereas eglfs has basic composition capabilities to make this work. Keyboard and mouse input (in this particular case coming from a touchpad) work fine too.

Troubleshooting guide

This is all nice when it works. When it doesn’t, it’s time for some debugging. Below are some useful tips.

(1)
Before everything else, check if configure picked up all the necessary things. Look at qtbase/config.summary and verify that the following are present:

  libinput................ yes

  OpenGL / OpenVG: 
    EGL .................. yes
    OpenGL ............... yes (OpenGL ES 2.0+)

  pkg-config ............. yes 

  QPA backends: 
    EGLFS ................ yes
    KMS .................. yes

  udev ................... yes

  xkbcommon-evdev......... yes

If this is not the case, trouble can be expected since some features will be disabled due to failing configuration tests. These are most often caused by missing headers and libraries in the sysroot. Many of the new features rely on pkg-config so it is essential to get it properly configured too.
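When cross-compiling, a common pitfall is pkg-config picking up host libraries instead of the sysroot’s. A minimal sketch of the usual setup (the sysroot path here is hypothetical; adjust it to your toolchain):

```shell
# Point pkg-config at the target sysroot, not the host
export PKG_CONFIG_SYSROOT_DIR=/opt/b2qt/sysroot
export PKG_CONFIG_LIBDIR=$PKG_CONFIG_SYSROOT_DIR/usr/lib/pkgconfig:$PKG_CONFIG_SYSROOT_DIR/usr/share/pkgconfig
# Sanity check: is libinput visible to pkg-config?
pkg-config --exists libinput && echo "libinput found"
```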

(2)
No output on the screen? No input from the mouse or keyboard? Enable verbose logging. Categorized logging is being taken into use in more and more areas of Qt, including most of the input subsystem and eglfs. Some of the interesting categories are listed below:

  • qt.qpa.input – Enables debug output both from the evdev and libinput input handlers. Very useful to check if a given input device was correctly recognized and opened.
  • qt.qpa.eglfs.kms – Enables logging from the KMS/DRM backend of eglfs.
  • qt.qpa.egldeviceintegration – Enables plugin-related logging in eglfs.

Additionally, the legacy environment variable QT_QPA_EGLFS_DEBUG can also be set to 1 to get additional information printed, for example about the EGLConfig that is in use.
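These categories are switched on through Qt’s logging rules. For example, to enable the ones listed above for a single run (the application name is hypothetical):

```shell
export QT_LOGGING_RULES="qt.qpa.input=true;qt.qpa.eglfs.kms=true;qt.qpa.egldeviceintegration=true"
export QT_QPA_EGLFS_DEBUG=1
./myapp -platform eglfs
```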

(3)
Check file permissions. /dev/fb0 and /dev/input/event* must be accessible by the application. Additionally, make sure no other application has a grab (as in EVIOCGRAB) on the input devices.
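A quick way to check both points from a root shell on the device (fuser typically ships in the psmisc package):

```shell
# Verify the application user can read these device nodes
ls -l /dev/fb0 /dev/input/event*
# See which processes currently hold the input devices open
fuser -v /dev/input/event*
```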

(4)
Q: I launched my application on the console without working keyboard input, I cannot exit and CTRL+C does not work!
A: Next time do export QT_QPA_ENABLE_TERMINAL_KEYBOARD=1 before launching the app. This is very handy for development purposes, until the initial issues with input are solved. The downside is that keystrokes go to the terminal, so this setting should be avoided afterwards.

The future and more information

While the final release of Qt 5.5 is still some months away, all the new features mentioned above are there in the dev branch of qtbase, ready to be tested by those who like bleeding edge stuff. The work is not all done, naturally. There is room for improvements, for example when it comes to supporting screens connected or disconnected during the application’s lifetime, or using alternative keyboard layouts. These will come gradually later on.

Finally, it is worth noting that the Embedded Linux documentation page, which has received huge improvements over the last few major Qt releases, has been (and is still being) updated with information about the new graphics and input capabilities. Do not hesitate to check it out.

The post Qt Weekly #23: Qt 5.5 enhancements for Linux graphics and input stacks appeared first on Qt Blog.

Introducing the Qt Quick 2D Renderer


When Qt Quick 2 was introduced with the release of Qt 5.0, it came with a minimum requirement of either OpenGL 2.0 or OpenGL ES 2.0. For desktop and mobile platforms this is usually not an issue, and when it is, for example on Windows, it is now fairly easy to fall back to an OpenGL software rasteriser. If, however, your target is an embedded device without a GPU capable of OpenGL ES 2.0, then software rasterisation of OpenGL is an unwise option. Embedded devices without a GPU typically have fewer CPU resources available as well, so the overhead introduced by software rasterisation of OpenGL leads to unacceptable performance for even the simplest content. Many of the performance optimisations gained by using OpenGL to render Qt Quick 2 scenes are also negated by software rasterisation.

So as a solution to our Professional and Enterprise customers we are now providing an alternative scene graph renderer called the Qt Quick 2D Renderer.  The Qt Quick 2D Renderer works by rendering the Qt Quick scene graph using Qt’s raster paint engine instead of using OpenGL.   Using the Qt Quick 2D Renderer is as simple as building the module and setting an environment variable:

export QMLSCENE_DEVICE=softwarecontext

Now, instead of loading the default renderer, which uses OpenGL, Qt Quick will load our renderer plugin. This makes it possible to run Qt Quick 2 applications with platform plugins that have no OpenGL capability, like LinuxFB.
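On a GPU-less device the combination could be launched like this (the application name is hypothetical; the environment variable and the LinuxFB plugin are those mentioned above):

```shell
export QMLSCENE_DEVICE=softwarecontext
./myapp -platform linuxfb
```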

But wait! Doesn’t the QtQuick module itself depend on OpenGL?

Unfortunately, the Qt Quick module cannot be built without Qt itself being configured with OpenGL support. So even though most OpenGL calls inside the Qt Quick module have now moved into the renderer, Qt Quick still has APIs that cannot be changed within the Qt 5 release series and that depend on OpenGL. Fortunately, as long as you do not use those APIs, no OpenGL functions will be called.

So along with the Qt Quick 2D Renderer module we provide a set of dummy libraries and headers that will allow you to build Qt with OpenGL support, enabling you to build and use the QtQuick module.  However if you accidentally call any OpenGL functions, do not be surprised when your application crashes.

Limitations

So there are some downsides to not using OpenGL. The first, and maybe most obvious, is that any scene graph nodes requiring the use of OpenGL are ignored. Since the Qt Quick 2D Renderer is not actually rasterising the OpenGL content, but rather issuing an alternative set of render commands to produce the same result, it is not possible to use any OpenGL. Existing functionality in Qt Quick 2 that requires OpenGL to be present, like ShaderEffects or Particles, cannot be rendered. In many cases your Qt Quick UI containing these elements will still run, but the portions of the UI depending on these items will not be displayed.

The second limitation you can expect is a serious performance penalty. When rendering with OpenGL and a GPU, you get painting operations like translations basically for free. Without OpenGL, however, operations like rotating and scaling an item become expensive and should be avoided whenever possible. We also cannot easily apply clever tricks to determine what not to paint; we have to fall back to the painter’s algorithm and paint everything visible in the scene from back to front.

Another thing to keep in mind is that partial updates of the UI are not supported.  That means that if something in the scene needs to be redrawn, everything in your Qt Quick window will be redrawn.  This is not likely to be changed, and is due to the primary use case of Qt Quick 2 being an OpenGL renderer. 

Hardware Acceleration

Even though the lack of OpenGL translates to some pretty big compromises regarding performance with Qt Quick 2, all hope is not lost.  Many devices still have hardware available to accelerate 2D graphics.  This hardware is typically capable of accelerating certain types of painting operations like copying pixmaps and filling rectangles.  The Qt Quick 2D Renderer is optimised to take full advantage of any 2D hardware acceleration that may be provided by a platform plugin.

For embedded Linux the DirectFB platform plugin can enable Qt to take advantage of 2D graphics acceleration hardware if available.  If you then use the Qt Quick 2D Renderer with the DirectFB plugin, the drawing of QQuickItems like Rectangle, Image, and BorderImage will be accelerated in many cases.  2D graphics hardware does have limitations to what transformations can be accelerated though, so keep in mind that if you set the Rotation on an Item you will not be able to take advantage of hardware acceleration.
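Assuming the DirectFB platform plugin was built, switching to it is just a matter of the -platform argument (the application name is hypothetical):

```shell
export QMLSCENE_DEVICE=softwarecontext
./myapp -platform directfb
```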

Not Just Embedded

It is worth mentioning that while the Qt Quick 2D Renderer was developed with the “embedded devices without OpenGL” use case in mind, its use is not limited to embedded.  It is possible to test out the Qt Quick 2D Renderer on non-embedded platforms by using the same environment variable.  Keep in mind though that with the 5.4.0 release there are some rendering issues with screens that have a device pixel ratio greater than 1.0.  This should be resolved in the upcoming 5.4.1 release.

Who should use this?

For an embedded device project, if the requirement is a fluid UI with 60 FPS animations like those seen in the average smartphone, then you absolutely need hardware that supports OpenGL ES 2.0. If, however, you have existing hardware without a GPU capable of OpenGL ES 2.0, or simply lower expectations to match lower-cost hardware, then the Qt Quick 2D Renderer is the way to go when using Qt Quick 2.

Qt Quick 2D Renderer also provides the opportunity to share more code between the targets in your device portfolio.  For example if you are deploying to multiple devices that may or may not have OpenGL support, you can use the same Qt Quick 2 UI on all devices, even on the ones where previously you would either need to have a separate UI using QtWidgets or the legacy QtQuick1 module.

For desktop there are a few cases where it may make sense to use the Qt Quick 2D Renderer.  On Windows it can be used as an alternative to falling back to ANGLE or Mesa3D in the situation where neither OpenGL 2.0 nor Direct3D 9 or 11 are available.  It also makes it possible to run Qt Quick 2 applications via remote desktop solutions like VNC or X11 forwarding where normally the OpenGL support is insufficient.

Looking Forward

The Qt 5.4.0 release is just the start for the Qt Quick 2D Renderer.  Work is ongoing to improve the performance and quality to provide the maximum benefit to device creators writing Qt Quick 2 UIs for embedded devices without OpenGL.  One of the things that is being worked on now is enabling the use of QtWebEngine with the Qt Quick 2D Renderer which currently is unavailable because of a hard dependency on OpenGL.  Here is a preview of QtWebEngine running on a Colibri VF61 module from Toradex:

The post Introducing the Qt Quick 2D Renderer appeared first on Qt Blog.

Qt Weekly #27: WiFi networking support on embedded devices


As a part of the Qt for Device Creation offering we provide several helpful add-on modules that assist in creating applications for embedded devices. In this blog post I would like to demonstrate some of the features supported by the B2Qt WiFi add-on module.

What is it?

The B2Qt WiFi module provides APIs for connecting embedded devices to a wireless network. There are C++ and QML APIs available for managing wireless network connectivity, scanning for wireless network access points, retrieving information from these access points, monitoring for network state changes and more. The API allows you to manage when and to which network to connect, and when to go offline.

Overview of the API

The module consists of 3 classes:

  • (Q)WifiConfiguration – Used to define a network configuration.
  • (Q)WifiDevice – Represents a physical device.
  • (Q)WifiManager – Main interface to the WiFi functionality (Singleton).

How to use it

Here is a code snippet in QML demonstrating how easy it is to connect a device to a wireless access point named “bus-station-wifi” that uses the WPA2 security protocol:

    import B2Qt.Wifi 1.0

    WifiConfiguration {
        id: localConfig
        ssid: "bus-station-wifi"
        passphrase: "mypassword"
        protocol: "WPA2"
    }

    Connections {
        target: WifiManager
        onBackendStateChanged: {
            if (WifiManager.backendState === WifiManager.Running)
                WifiManager.connect(localConfig)
        }
        onNetworkStateChanged: {
            if (WifiManager.networkState === WifiManager.Connected)
                print("successfully connected to: " + WifiManager.currentSSID)
        }
    }

    Component.onCompleted: {
        if (WifiManager.backendState === WifiManager.Running) {
            WifiManager.connect(localConfig)
        } else {
            // starts initialization of wifi backend
            WifiManager.start()
        }
    }

And the following code illustrates how to achieve the same in C++:


class WifiConnectionHandler : public QObject
{
    Q_OBJECT
public:
    WifiConnectionHandler()
    {
        m_config.setSsid("bus-station-wifi");
        m_config.setPassphrase("mypassword");
        m_config.setProtocol("WPA2");

        m_manager = QWifiManager::instance();
        connect(m_manager, &QWifiManager::backendStateChanged,
                this, &WifiConnectionHandler::handleBackendStateChanged);

        if (m_manager->backendState() == QWifiManager::Running) {
            m_manager->connect(&m_config);
        } else {
            m_manager->start();
        }
    }

protected slots:
    void handleBackendStateChanged(QWifiManager::BackendState state)
    {
        if (state == QWifiManager::Running)
            m_manager->connect(&m_config);
    }

private:
    QWifiManager *m_manager;
    QWifiConfiguration m_config;
};

The previous examples showed how to connect to a network for which the configuration is known beforehand. As mentioned earlier, the WiFi module can also be used to scan for available WiFi access points. In the next example we use the scan results and present them in a list where the user can select which network to use.

    Binding {
        target: WifiManager
        property: "scanning"
        value: WifiManager.backendState === WifiManager.Running
    }

    WifiConfiguration { id: config }
    ListView {
        anchors.fill: parent
        spacing: 8
        model: WifiManager.networks
        delegate: Row {
            spacing: 10
            Text { text: ssid }
            TextField { id: passwordInput }
            Button {
                text: "connect"
                onClicked: {
                    config.ssid = ssid;
                    config.passphrase = passwordInput.text
                    WifiManager.connect(config)
                }
            }
        }
    }

Here you can see a screenshot of a more advanced application running on a Nitrogen6_MAX board that utilizes WiFi API for wireless network setup.


For more information check out the documentation.

The post Qt Weekly #27: WiFi networking support on embedded devices appeared first on Qt Blog.

Qt @ Embedded World 2015


Yesterday we wrapped up a great three days at Embedded World 2015 (Feb 24–26) in Nuremberg, Germany. Considered the largest embedded development trade show, it drew, by the organizers’ estimate, 30,000 exhibition visitors, over 1,500 congress participants and around 900 exhibitors from over 35 countries.

Personally, this is my 7th Embedded World with Qt and I have to say that this one has been our best one yet. I know, I say that every year, but the good thing is that we keep getting better every year; larger booth, more demos, more visitors, more companies sold on the fact that Qt is the best solution for embedded software development.

We showcased Qt’s strength in embedded software development with a combination of demos and customer projects focused on Qt in Automotive, Qt in Automation and Qt in IoT, each telling the story of how Qt is used in a real-life reference project or product. Qt’s role in embedded software development was further illustrated by our “Under the Hood” demos highlighting several specific features and functions, such as:

  • Qt and OpenCV: Demonstrating how to easily & efficiently integrate computer vision or other computation or image processing algorithms with Qt Quick 2 and Qt Multimedia.
  • Qt Creator in Action – Debugging and profiling: Demonstrating the power of the Qt Creator IDE for one-click embedded deployment, on-device debugging and profiling, and Qt Quick Designer
  • Qt UI capabilities: Qt WebEngine, the Qt Virtual Keyboard, multimedia capabilities, etc.
  • Qt for QNX: Qt 5 applications and UIs on QNX Neutrino
  • Qt for VxWorks: Qt 5 applications and UIs on VxWorks 7

This year, we had, like in the past 3 years, the live coding theater running daily on the hour from 10:00-16:00. We mixed it up this year with 3 different sessions: Introduction to Device Creation with Qt, Functionality Meets Productivity – Qt Developer Tools and Qt Features Overview. Thanks to our fearless and dedicated presenters. Well done!

I was happiest to have a large portion of our booth demos come from our Qt Partners. They had an abundance of industry-specific demos in automotive, automation and IoT. Qt’s story was made stronger by our partners demonstrating the power of Qt and its many uses in the embedded space. Thanks to e-Gits, basysKom, Froglogic, KDAB, Adeneo Embedded and ICS for being with us.

Without further ado, below are a few pictures from our booth and demos. If you came by our booth, thanks for visiting us! I would also like to say thank you to The Qt Company staff onsite, and again to our partners. Until next year.

 

 

Photo gallery: the Embedded World 2015 booth and live coding theater; Qt in Medical and Qt in Marine Navigation; Qt machine vision with OpenCV; partner demos from KDAB (nautical UI), Froglogic (testing tool), basysKom (automation), e-Gits, ICS and Adeneo Embedded (automotive); and Qt on Magneti Marelli.

The post Qt @ Embedded World 2015 appeared first on Qt Blog.

Qt Weekly #28: Qt and CUDA on the Jetson TK1


NVIDIA’s Jetson TK1 is a powerful development board based on the Tegra K1 chip. It comes with a GPU capable of OpenGL 4.4, OpenGL ES 3.1 and CUDA 6.5. From Qt’s perspective this is a somewhat unorthodox embedded device because its customized Linux system is based on Ubuntu 14.04 and runs the regular X11 environment. Therefore the approach that is typical for low and medium-end embedded hardware, running OpenGL-accelerated Qt apps directly on the framebuffer using the eglfs platform plugin, will not be suitable.

In addition, the ability to do hardware-accelerated computing using CUDA is very interesting, especially when it comes to interoperating with OpenGL. Let’s take a look at how CUDA code can be integrated with a Qt-based application.

The Jetson TK1 board

Building Qt

This board is powerful enough to build everything on its own without any cross-compilation. Configuring and building Qt is no different than in any desktop Linux environment. One option that needs special consideration however is -opengl es2 because Qt can be built either in a GLX + OpenGL or EGL + OpenGL ES configuration.

For example, the following configures Qt to use GLX and OpenGL:

configure -release -nomake examples -nomake tests

while adding -opengl es2 requests the usage of EGL and OpenGL ES:

configure -release -opengl es2 -nomake examples -nomake tests

If you are planning to run applications relying on modern, non-ES OpenGL features, or to use CUDA, then go for the former. If, however, you have existing code from the mobile or embedded world relying on EGL or OpenGL ES, then it may be better to go for the latter.

The default platform plugin will be xcb, so running Qt apps without specifying the platform plugin will work just fine. This is the exact same plugin that is used on any ordinary X11-based Linux desktop system.

Vsync gotchas

Once the build is done, you will most likely run some OpenGL-based Qt apps. And then comes the first surprise: applications are not synchronized to the vertical refresh rate of the screen.

When running, for instance, the example from qtbase/examples/opengl/qopenglwindow, we expect a nice and smooth 60 FPS animation with the rendering thread throttled appropriately. This unfortunately is not the case unless the application is fullscreen. Therefore many apps will want to replace calls like show() or showMaximized() with showFullScreen(). This way the thread is throttled as expected.

A further surprise may come in QWidget-based applications when opening a popup or a dialog. Unfortunately this also disables synchronization, even though the main window still covers the entire screen. In general we can conclude that the standard embedded recommendation of sticking to a single fullscreen window is very valid for this board too, even when using xcb, although for completely different reasons.

CUDA

After installing CUDA, the first and in fact the only challenge is to tackle the integration of nvcc with our Qt projects.

Unsurprisingly, this has been tackled by others before. Building on this excellent article, the most basic integration in our .pro file could look like this:

... # QT, SOURCES, HEADERS, the usual stuff 

CUDA_SOURCES = cuda_stuff.cu

CUDA_DIR = /usr/local/cuda
CUDA_ARCH = sm_32 # as supported by the Tegra K1

INCLUDEPATH += $$CUDA_DIR/include
LIBS += -L $$CUDA_DIR/lib -lcudart -lcuda
osx: LIBS += -F/Library/Frameworks -framework CUDA

cuda.commands = $$CUDA_DIR/bin/nvcc -c -arch=$$CUDA_ARCH -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
cuda.dependency_type = TYPE_C
cuda.depend_command = $$CUDA_DIR/bin/nvcc -M ${QMAKE_FILE_NAME}
cuda.input = CUDA_SOURCES
cuda.output = ${QMAKE_FILE_BASE}_cuda.o
QMAKE_EXTRA_COMPILERS += cuda

In addition to Linux this will also work out of the box on OS X. Adapting it to Windows should be easy. For advanced features like reformatting nvcc’s error messages to be more of Creator’s liking, see the article mentioned above.
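For the curious, a hedged sketch of what the Windows branch might look like is shown below. The CUDA installation path, toolkit version, and library subdirectory are assumptions; adjust them to your setup:

```qmake
# Hypothetical Windows adaptation of the .pro snippet above; paths are assumptions.
win32 {
    CUDA_DIR = "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v6.5"
    # Use lib/Win32 instead of lib/x64 for 32-bit builds.
    LIBS += -L$$CUDA_DIR/lib/x64 -lcudart -lcuda
    cuda.commands = \"$$CUDA_DIR/bin/nvcc.exe\" -c -arch=$$CUDA_ARCH -o ${QMAKE_FILE_OUT} ${QMAKE_FILE_NAME}
}
```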

A QOpenGLWindow-based application that updates an image via CUDA on every frame could now look something like the following. The approach is the same regardless of the OpenGL enabler in use: QOpenGLWidget or a custom Qt Quick item would operate along the same principles: call cudaGLSetGLDevice when the OpenGL context is available, register the OpenGL resources to CUDA, and then do map – invoke CUDA kernel – unmap – draw on every frame.

Note that in this example we are using a single pixel buffer object. There are other ways to do interop, for example we could have registered the GL texture, got a CUDA array out of it and bound that either to a CUDA texture or surface.

...
// functions from cuda_stuff.cu
extern void CUDA_init();
extern void *CUDA_registerBuffer(GLuint buf);
extern void CUDA_unregisterBuffer(void *res);
extern void *CUDA_map(void *res);
extern void CUDA_unmap(void *res);
extern void CUDA_do_something(void *devPtr, int w, int h);

class Window : public QOpenGLWindow, protected QOpenGLFunctions
{
public:
    ...
    void initializeGL();
    void paintGL();

private:
    QSize m_imgSize;
    GLuint m_buf;
    GLuint m_texture;
    void *m_cudaBufHandle;
};

...

void Window::initializeGL()
{
    initializeOpenGLFunctions();
    
    CUDA_init();

    QImage img("some_image.png");
    m_imgSize = img.size();
    img = img.convertToFormat(QImage::Format_RGB32); // BGRA on little endian
    
    glGenBuffers(1, &m_buf);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_buf);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, m_imgSize.width() * m_imgSize.height() * 4, img.constBits(), GL_DYNAMIC_COPY);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    m_cudaBufHandle = CUDA_registerBuffer(m_buf);

    glGenTextures(1, &m_texture);
    glBindTexture(GL_TEXTURE_2D, m_texture);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_imgSize.width(), m_imgSize.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, 0);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

void Window::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);

    void *devPtr = CUDA_map(m_cudaBufHandle);
    CUDA_do_something(devPtr, m_imgSize.width(), m_imgSize.height());
    CUDA_unmap(m_cudaBufHandle);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_buf);
    glBindTexture(GL_TEXTURE_2D, m_texture);
    // Fast path due to BGRA
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_imgSize.width(), m_imgSize.height(), GL_BGRA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    ... // do something with the texture

    update(); // request the next frame
}
...

The corresponding cuda_stuff.cu:

#include <stdio.h>
#include <string.h> // for memset()
// Q_OS_MAC is not available here since no Qt headers are included in the .cu file,
// so we test the compiler-provided __APPLE__ instead.
#ifdef __APPLE__
#include <OpenGL/gl.h>
#else
#include <GL/gl.h>
#endif
#include <cuda.h>
#include <cuda_gl_interop.h>

void CUDA_init()
{
    cudaDeviceProp prop;
    int dev;
    memset(&prop, 0, sizeof(cudaDeviceProp));
    prop.major = 3;
    prop.minor = 2;
    if (cudaChooseDevice(&dev, &prop) != cudaSuccess)
        puts("failed to choose device");
    if (cudaGLSetGLDevice(dev) != cudaSuccess)
        puts("failed to set gl device");
}

void *CUDA_registerBuffer(GLuint buf)
{
    cudaGraphicsResource *res = 0;
    if (cudaGraphicsGLRegisterBuffer(&res, buf, cudaGraphicsRegisterFlagsNone) != cudaSuccess)
        printf("Failed to register buffer %u\n", buf);
    return res;
}

void CUDA_unregisterBuffer(void *res)
{
    if (cudaGraphicsUnregisterResource((cudaGraphicsResource *) res) != cudaSuccess)
        puts("Failed to unregister resource for buffer");
}

void *CUDA_map(void *res)
{
    if (cudaGraphicsMapResources(1, (cudaGraphicsResource **) &res) != cudaSuccess) {
        puts("Failed to map resource");
        return 0;
    }
    void *devPtr = 0;
    size_t size;
    if (cudaGraphicsResourceGetMappedPointer(&devPtr, &size, (cudaGraphicsResource *) res) != cudaSuccess) {
        puts("Failed to get device pointer");
        return 0;
    }
    return devPtr;
}

void CUDA_unmap(void *res)
{
    if (cudaGraphicsUnmapResources(1,(cudaGraphicsResource **) &res) != cudaSuccess)
        puts("Failed to unmap resource");
}

__global__ void run(uchar4 *ptr)
{
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    ...
}

void CUDA_do_something(void *devPtr, int w, int h)
{
    const int blockSize = 16; // 256 threads per block
    run<<<dim3(w / blockSize, h / blockSize), dim3(blockSize, blockSize)>>>((uchar4 *) devPtr);
}

This is all that’s needed to integrate the power of Qt, OpenGL and CUDA. Happy hacking!

The post Qt Weekly #28: Qt and CUDA on the Jetson TK1 appeared first on Qt Blog.

Qt Quick with the power of OpenCL on Embedded Linux devices


Many Qt users encounter a need to integrate GPU compute solutions into their Qt-based applications. What is more, with the advent of compute API implementations and powerful GPUs on embedded devices, using OpenCL or CUDA on an Embedded Linux device is a reality now. In a previous post we looked at NVIDIA’s Jetson TK1 board and discovered how easy it is to get started with CUDA development in Qt applications using OpenGL. When it comes to OpenCL, developers are not left out in the cold either, thanks to Hardkernel’s ODROID-XU3, where the ARM Mali-T628 graphics processor provides full OpenCL 1.1 support with CL-GL interop in addition to OpenGL ES 3.0.

In this post we will take a look at a simple and powerful approach to integrating Qt Quick applications and OpenCL. We will focus on use cases that involve sharing OpenGL resources like textures or buffers between OpenGL and OpenCL. The examples demonstrate three standard compute use cases and we will see them running on an actual ODROID board.

ODROID-XU3

Why OpenCL and Qt?

The ability to perform complex, highly parallel computations on embedded devices, while keeping as much data on the GPU as possible, and to visualize the results with Qt Quick and touch-friendly Qt Quick Controls opens the door to easily creating embedded systems that perform advanced tasks in the domains of computer vision, robotics, image and signal processing, bioinformatics, and all sorts of heavyweight data crunching. As an example, think of gesture recognition: with high resolution webcams, Qt Multimedia, Qt Quick, Qt Quick Controls, and the little framework presented below, applications can focus on the things that matter: the algorithms (OpenCL kernels) performing the core of the work and the C++ counterpart that enqueues these kernels. The rest is taken care of by Qt.

Looking back: Qt OpenCL

OpenCL is not unknown to Qt – once upon a time, back in the Qt 4 days, there used to be a Qt OpenCL module, a research project developed in Brisbane. It used to contain a full 1:1 API wrapper for OpenCL 1.0 and 1.1, and some very helpful classes to get started with CL-GL interop.

Today, with the rapid evolution of the OpenCL API, the availability of an official C++ wrapper, and the upcoming tighter C++ integration approaches like SYCL, we believe there is little need for straightforward Qt-ish wrappers. Applications are encouraged to use the OpenCL C or C++ APIs as they see fit. However, when it comes to the helpers that simplify common tasks like choosing an OpenCL platform and device so that we get interoperability with OpenGL, they turn out to be really handy. Especially when writing cross-platform applications. Case in point: Qt Multimedia 5.5 ships with an OpenCL-based example as presented in the video filters introduction post. The OpenCL initialization boilerplate code in that example is unexpectedly huge. This shows that the need for modern, Qt 5 based equivalents of the old Qt OpenCL classes like QCLContextGL has not gone away. In fact, with the ubiquity of OpenCL and OpenGL on all kinds of devices and platforms, they are more desirable than ever.

Qt 5.5 on the ODROID-XU3

Qt 5.5 introduces support for the board in the device makespec linux-odroid-xu3-g++. Just pass -device odroid-xu3 to configure.

For example, to build release mode binaries with a toolchain borrowed from the Raspberry Pi, assuming a sysroot at ~/odroid/sysroot:

./configure -release -prefix /usr/local -extprefix ~/odroid/sysroot/usr/local -hostprefix ~/odroid/qt5-build -device odroid-xu3 -device-option CROSS_COMPILE=~/odroid/toolchain/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf- -sysroot ~/odroid/sysroot -nomake examples -nomake tests -opengl es2

This will configure the Qt libraries and target tools like qmlscene to be deployed under /usr/local in the sysroot, while the host tools – like the x86 build of qmake that is to be used when building applications afterwards – get installed into ~/odroid/qt5-build.

When it comes to the platform plugins, both xcb and eglfs are usable, but only one at a time: the Mali graphics driver binary is different for X11 and fbdev, and has to be switched accordingly. The Ubuntu image from Hardkernel comes with X11 in place. While OpenGL is usable under X too, the usage of eglfs and the fbdev drivers is recommended, as usual.

For more information on the intricacies and a step by step guide to deploying Qt on top of the Hardkernel image, see this wiki page. If you have a Mali-based ARM Chromebook featuring a similar CPU-GPU combo, see here.

It is worth noting that thanks to Qt’s Android port, running a full Android system with Qt apps on top is also feasible on this board.

Time for some action

Now to the fun part. Below are three examples running on the framebuffer in full HD resolution with the fbdev Mali driver variant, Qt 5.5 and the eglfs platform plugin. All of them utilize OpenCL 1.1, CL-GL interop, and are regular Qt Quick 2 applications. They all utilize the little example framework which we call Qt Quick CL for now.

OpenGL texture to OpenGL texture via OpenCL


An OpenCL-based alternative for ShaderEffect

First, let’s take a look at a standard image processing use case: we will execute one or more OpenCL kernels on our input, which can be a Qt Quick Image element, a (potentially invisible) sub-tree of the scene, or any texture provider, and generate a new texture. With CL-GL interop the data never leaves the GPU: no pixel data is copied between the CPU and the GPU. Those familiar with Qt Quick have likely realized already that this is in fact an OpenCL-based alternative to the built-in, GLSL-based ShaderEffect items.

By using the easy-to-use base classes to automatically and transparently manage OpenCL and CL-GL initialization, and to hide the struggles and gotchas of Qt Quick’s dedicated render thread and OpenGL contexts, the meat of the above application gets reduced to something like the following:

class CLRunnable : public QQuickCLImageRunnable
{
public:
    CLRunnable(QQuickCLItem *item)
        : QQuickCLImageRunnable(item)
    {
        m_clProgram = item->buildProgramFromFile(":/kernels.cl");
        m_clKernel = clCreateKernel(m_clProgram, "Emboss", 0);
    }
    ~CLRunnable() {
        clReleaseKernel(m_clKernel);
        clReleaseProgram(m_clProgram);
    }
    void runKernel(cl_mem inImage, cl_mem outImage, const QSize &size) Q_DECL_OVERRIDE {
        clSetKernelArg(m_clKernel, 0, sizeof(cl_mem), &inImage);
        clSetKernelArg(m_clKernel, 1, sizeof(cl_mem), &outImage);
        const size_t workSize[] = { size_t(size.width()), size_t(size.height()) };
        clEnqueueNDRangeKernel(commandQueue(), m_clKernel, 2, 0, workSize, 0, 0, 0, 0);
    }
private:
    cl_program m_clProgram;
    cl_kernel m_clKernel;
};

class CLItem : public QQuickCLItem
{
    Q_OBJECT
    Q_PROPERTY(QQuickItem *source READ source WRITE setSource)
public:
    CLItem() : m_source(0) { }
    QQuickCLRunnable *createCL() Q_DECL_OVERRIDE { return new CLRunnable(this); }
    QQuickItem *source() const { return m_source; }
    void setSource(QQuickItem *source) { m_source = source; update(); }
private:
    QQuickItem *m_source;
};

...
qmlRegisterType<CLItem>("quickcl.qt.io", 1, 0, "CLItem");
...

import quickcl.qt.io 1.0

Item {
    Item {
        id: src
        layer.enabled: true
        ...
    }
    CLItem {
        id: clItem
        source: src
        ...
    }
}

Needless to say, the application works on a wide variety of platforms. Windows, OS X, Android, and Linux are all good as long as OpenGL (ES) 2.0, OpenCL 1.1 and CL-GL interop are available. Getting started with OpenCL in Qt Quick applications won’t get simpler than this.

OpenGL texture to arbitrary data via OpenCL


Histogram in Qt Quick directly on the GPU

And now something more complex: an image histogram. Histograms are popular with Qt, and the recent improvements in Qt Multimedia introduce the possibility of efficiently calculating live video frame histograms on the GPU.

In this example we take it to the next level: the input is an arbitrary live sub-tree of the Qt Quick scene, while the results of the calculation are visualized with a little Javascript and regular OpenGL-based Qt Quick elements. Those 256 bars on the right are nothing else but standard Rectangle elements. The input image never leaves the GPU, naturally. All this with a few lines of C++ and QML code.

OpenGL vertex buffer generation with OpenCL

VBO generation from OpenCL on the ODROID-XU3

Last but not least, something other than GL textures and CL image objects: buffers! The positions of the vertices, which are visualized with GL by drawing points, are written to the vertex buffer using OpenCL. The data is then used from GL as-is; no readbacks and copies are necessary, unlike with Qt Quick’s own GL-based particle systems.

To make it all more exciting, the drawing happens inside a custom QQuickItem that functions similarly to QQuickFramebufferObject. This allows us to mix our CL-generated drawing with the rest of the scene, including Qt Quick Controls when necessary.

Looking forward: Qt Quick CL

QtQuickCL is a small research and demo framework for Qt 5 that enables easily creating Qt Quick items that execute OpenCL kernels and use OpenGL resources as their input or output. The functionality is intentionally minimal but powerful. All the CL-GL interop, including the selection of the correct CL platform and device, is taken care of by the module. The QQuickCLItem – QQuickCLRunnable split in the API ensures easy and safe CL and GL resource management even when Qt Quick’s threaded render loop is in use. Additional convenience is provided for the cases when the input, output or both are OpenGL textures, like for instance the first two of the three examples shown above.

The code, including the three examples shown above, is all available on Gerrit and code.qt.io as a qt-labs repository. The goal is not to provide a full-blown OpenCL framework or wrapper, but rather to serve as a useful example and reference for integrating Qt Quick and OpenCL, and to help getting started with OpenCL development. Happy hacking!

The post Qt Quick with the power of OpenCL on Embedded Linux devices appeared first on Qt Blog.

Integrating custom OpenGL rendering with Qt Quick via QQuickFramebufferObject


Integrating custom OpenGL rendering code, for example to show 3D models, with Qt Quick is a popular topic with Qt users. With the release of Qt 5.0 the standard approach was to use the beforeRendering and afterRendering signals to issue the custom OpenGL commands before or after the rendering of the rest of the scene, thus providing an over- or underlay type of setup.

With Qt 5.2 a new, additional approach has been introduced: QQuickFramebufferObject. This allows placing and transforming the custom OpenGL rendering like any other Quick item, providing the most flexible solution at the expense of rendering via OpenGL framebuffer objects. While it was little known at the time of its original release, it is becoming more and more popular in applications built with Qt 5.4 and 5.5. It is also one of the building blocks for Qt 3D, where the Scene3D element is in fact nothing but a QQuickFramebufferObject under the hood. We will now take a look at this powerful class, look at some examples and discuss the issues that tend to come up quite often when getting started.

First things first: why is this great?

textureinsgnode is the standard example of QQuickFramebufferObject. The Qt logo is rendered directly via OpenGL while having animated transformations applied to the QQuickFramebufferObject-derived item.

qquickviewcomparison

This QQuickWidget example also features QQuickFramebufferObject: the Qt logo is a real 3D mesh with a basic phong material rendered via a framebuffer object. The textured quad, rendered using the framebuffer’s associated texture, becomes a genuine item in the scene with other standard Qt Quick elements above and below it. It also demonstrates the usage of custom properties to control the camera.

The API

QQuickFramebufferObject is a QQuickItem-derived class that is meant to be subclassed further. That subclass is then registered to QML and used like any other standard Quick element.

However, having the OpenGL rendering moved to a separate thread, as Qt Quick does on many platforms, introduces certain restrictions on OpenGL resource management. To make this as painless as possible, the design of QQuickFramebufferObject follows the well-known split: subclasses are expected to reimplement a createRenderer() virtual function. This factory function is always called on the scenegraph’s render thread (which is either the main (GUI) thread, or a separate, dedicated one). The returned QQuickFramebufferObject::Renderer subclass belongs to this render thread. All its functions, including the constructor and destructor, are guaranteed to be invoked on the render thread with the OpenGL context bound and therefore it is always safe to create, destroy, or access OpenGL resources.

In practice this is no different than the QQuickItem – QSGNode or the QVideoFilter – QVideoFilterRunnable pattern.

The basic outline of the C++ code usually becomes something like the following:

class FbItemRenderer : public QQuickFramebufferObject::Renderer
{
    QOpenGLFramebufferObject *createFramebufferObject(const QSize &size) Q_DECL_OVERRIDE
    {
        QOpenGLFramebufferObjectFormat format;
        format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
        // optionally enable multisampling by doing format.setSamples(4);
        return new QOpenGLFramebufferObject(size, format);
    }

    void render() Q_DECL_OVERRIDE
    {
        // Called with the FBO bound and the viewport set.
        ... // Issue OpenGL commands.
    }
    ...
};

class FbItem : public QQuickFramebufferObject
{
    QQuickFramebufferObject::Renderer *createRenderer() const Q_DECL_OVERRIDE
    {
        return new FbItemRenderer;
    }
    ...
};

int main(int argc, char **argv)
{
    ...
    qmlRegisterType<FbItem>("fbitem", 1, 0, "FbItem");
    ...
}

In the QML code we can import and take the custom item into use:

import fbitem 1.0

FbItem {
    anchors.fill: parent
    ... // transform, animate like any other Item
}

This is in fact all you need to get started and have your custom OpenGL rendering shown in a Qt Quick item.

Pitfalls: context and state

When getting started with QQuickFramebufferObject, the common issue is not getting anything rendered, or getting incomplete rendering. This can almost always be attributed to the fact that QQuickFramebufferObject::Renderer subclasses use the same OpenGL context as the Qt Quick scenegraph’s renderer. This is very efficient due to avoiding context switches, and prevents unexpected issues due to sharing and using resources from multiple contexts, but it comes at the cost of having to take care of resetting the OpenGL state the rendering code relies on.

In earlier Qt versions the documentation was not quite clear on this point. Fortunately in Qt 5.5 this is now corrected, and the documentation of render() includes the following notes:

Do not assume that the OpenGL state is all set to the defaults when this function is invoked, or that it is maintained between calls. Both the Qt Quick renderer and the custom rendering code uses the same OpenGL context. This means that the state might have been modified by Quick before invoking this function.

It is recommended to call QQuickWindow::resetOpenGLState() before returning. This resets OpenGL state used by the Qt Quick renderer and thus avoids interference from the state changes made by the rendering code in this function.

What does this mean in practice?

As an example, say you have some old OpenGL 1.x fixed pipeline code rendering a not-very-3D triangle. Even in this simplest possible case you may need to ensure that certain state is disabled: we must not have a shader program active and depth testing must be disabled. glDisable(GL_DEPTH_TEST); glUseProgram(0); does the job. For rendering that is more 3D oriented, a good example of a very common issue is forgetting to re-enable depth writes via glDepthMask(GL_TRUE).

Although not necessary in many cases, it is good practice to reset all the state the Qt Quick renderer potentially relies on when leaving render(). Since Qt 5.2 a helper function is available to do this: resetOpenGLState(). To access this member function, store a pointer to the corresponding item in the Renderer and do the following just before returning from render():

    m_item->window()->resetOpenGLState();

Note that this must be the last thing we do in render() because it also alters the framebuffer binding. Issuing further rendering commands afterwards will lead to unexpected results.

Custom properties and data synchronization

Accessing the Renderer object’s associated item looks easy. All we need is to store a pointer, right? Be aware however that this can be very dangerous: accessing data at arbitrary times from the Renderer subclass is not safe because the functions there execute on the scenegraph’s render thread while ordinary QML data (like the associated QQuickFramebufferObject subclass) live on the main (GUI) thread of the application.

As an example, say that we want to expose the rotation of some model in our 3D scene to QML. This is extremely handy since we can apply bindings and animations to it, utilizing the full power of QML.

class FbItem : public QQuickFramebufferObject
{
    Q_OBJECT
    Q_PROPERTY(QVector3D rotation READ rotation WRITE setRotation NOTIFY rotationChanged)

public:
    QVector3D rotation() const { return m_rotation; }
    void setRotation(const QVector3D &v);
    ...
private:
    QVector3D m_rotation;
};

So far so good. Now, how do we access this vector in the FbItemRenderer’s render() implementation? At first, simply using m_item->rotation() may look like the way to go. Unfortunately it is unsafe and therefore wrong.

Instead, we must maintain a copy of the relevant data in the renderer object. The synchronization must happen at a well-defined point, where we know that the main (GUI) thread will not access the data. The render() function is not suitable for this. Fortunately, QQuickFramebufferObject makes all this pretty easy with the Renderer’s synchronize() function:

class FbItemRenderer : public QQuickFramebufferObject::Renderer
{
public:
    void synchronize(QQuickFramebufferObject *item) Q_DECL_OVERRIDE {
        FbItem *fbitem = static_cast<FbItem *>(item);
        m_rotation = fbitem->rotation();
    }
    ...
private:
    QVector3D m_rotation; // the render thread's copy of the data; this is what we will use in render()
};

When synchronize() is called, the main (GUI) thread is guaranteed to be blocked and therefore it cannot read or write the FbItem’s data. Accessing those on the render thread is thus safe.

Triggering updates

To schedule a new call to render(), applications have two options. When on the main (GUI) thread, or inside JavaScript code, use QQuickItem::update(). In addition, QQuickFramebufferObject::Renderer adds its own update() function, which is only usable from the render thread, for example inside render() to schedule a re-rendering of the FBO content on the next frame.

In practice the most common case is to schedule a refresh for the FBO contents when some property changes. For example, the setRotation() function from the above example could look like this:

void FbItem::setRotation(const QVector3D &v)
{
    if (m_rotation != v) {
        m_rotation = v;
        emit rotationChanged();
        update();
    }
}

Internals

Finally, it is worth noting that QQuickFramebufferObject is also prepared to handle multisampling and screen changes.

Whenever the QOpenGLFramebufferObject returned from createFramebufferObject() has multisampling enabled (the QOpenGLFramebufferObjectFormat has a sample count greater than 0 and multisampled framebuffers are supported by the OpenGL implementation in use), the samples will be resolved with a glBlitFramebuffer call into a non-multisampled FBO after each invocation of render(). This is done transparently to the applications so they do not have to worry about it.

Resizing the window with the Qt Quick content will lead to recreating the FBO because its size should change too. The application will see this as a repeated call to createFramebufferObject() with the new size. It is possible to opt out of this by setting the textureFollowsItemSize property to false.
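For example, opting out could look like this in QML (a minimal sketch, assuming the FbItem registration shown earlier):

```qml
import fbitem 1.0

FbItem {
    anchors.fill: parent
    // Keep the FBO at its initial size instead of recreating it on item resizes.
    textureFollowsItemSize: false
}
```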

However, there is a special case when the FBO has to get recreated regardless: when moving the window to a screen with a different device pixel ratio. For example moving a window between a retina and non-retina screen on OS X systems will inherently need a new, double or half sized, framebuffer, even when the window dimensions are the same in device independent units. Just like with ordinary resizes, Qt is prepared to handle this by requesting a new framebuffer object with a different size when necessary. One possible pitfall here is the applications’ caching of the results of the factory functions: avoid this. createFramebufferObject() and createRenderer() must never cache their return value. Just create a new instance and return it. Keep it simple. Qt takes care of managing the returned instance and destroying it when the time comes. Happy hacking!

The post Integrating custom OpenGL rendering with Qt Quick via QQuickFramebufferObject appeared first on Qt Blog.


Qt Virtual Keyboard 1.3 Released – Adding Japanese and Korean Language Support


Today, we have released a new version 1.3 of the Qt Virtual Keyboard. As new features, we are adding support for Japanese and Korean along with support for Windows desktop.

A big use case over the years for Qt has been in creating embedded devices with an interactive user interface. More and more embedded and industrial devices are moving towards touch screens as their primary interface. As external input devices are no longer used, the device creator often needs to leverage a virtual keyboard for text input. Although a virtual keyboard may seem like a trivial UI component, it is actually quite complex. When you think about modern keyboards, input mechanisms, customization, scalability and especially requirements around internationalization and localization, a virtual keyboard can easily become a very complex challenge that could hinder the whole usability of your device if not made properly.

The focus of Qt is to shorten time-to-market across all aspects of the development workflow, so over the past years we have developed the Qt Virtual Keyboard to provide a complete, ready-made solution for Qt-powered devices. Besides touch-based screens it also supports other input mechanisms, like 2-way and 5-way navigation (joystick/scrollwheel). The Qt Virtual Keyboard is available with a commercial Enterprise license at no additional cost. Using the Qt Virtual Keyboard in your Qt-based devices is easy and we are continuously making sure it works nicely with new releases of Qt.

Key features of the Qt Virtual Keyboard include:

  • Customizable keyboard layouts and styles with dynamic switching
  • Predictive text input with word selection list support
  • Character preview and alternative character view
  • Automatic capitalization and space insertion
  • Scalability to different resolutions
  • Support for different character sets (Latin, Simplified Chinese, Hindi, Japanese, Arabic, Korean, …)
  • Support for most common input languages (see list below), with possibility to easily extend the language support
  • Left-to-right and right-to-left input
  • Hardware key support for 2-way and 5-way navigation
  • Sound feedback
  • Cross-platform functionality

The Qt Virtual Keyboard version 1.3 supports the following languages:

  • Arabic
  • Chinese (Simplified)
  • Danish
  • English
  • Finnish
  • French
  • German
  • Hindi
  • Italian
  • Japanese (Hiragana, Katakana, and Kanji)
  • Korean
  • Norwegian
  • Persian/Farsi
  • Polish
  • Portuguese
  • Russian
  • Spanish
  • Swedish

If the language you need is not supported, you can add an additional language (see documentation for Adding New Keyboard Layouts).

To see more about Qt Virtual Keyboard, check out the documentation or watch the video about the core features of the Qt Virtual Keyboard:

The new Virtual Keyboard 1.3 is available for device creation customers who have a commercial Enterprise license via the online installer as well as from the Qt Account. If you are not a customer yet, you can evaluate Qt for Device Creation with our free 30 day trial.

The post Qt Virtual Keyboard 1.3 Released – Adding Japanese and Korean Language Support appeared first on Qt Blog.

Qt 5.5, computer vision, and the Nitrogen6X


In a previous post we have introduced one of the many new features of the upcoming Qt 5.5: the ability to easily integrate image processing and vision algorithms with camera and video streams via Qt Multimedia. It is now time to see the briefly mentioned OpenCV-based example in more detail.

Qt Quick, Multimedia, and OpenCV on the Nitrogen6X board

Nitrogen6X is a single board computer from Boundary Devices based on the well-known Freescale i.MX6 platform. Combined with the 5 megapixel MIPI camera and the 7″ multi-touch display, all of which Qt supports out of the box, it provides an excellent platform for modern user interfaces and multimedia applications.

To see it all in its full glory, check out the following video:

As always, the application is cross-platform. Below it is seen running on an ordinary Linux PC, with QT_QUICK_CONTROLS_STYLE set to Flat. This allows having the exact same style for the controls as on the actual devices. This previously commercial-only style is now available under LGPLv3 for anyone, on any platform, starting from Qt 5.5. This is excellent news when targeting embedded devices, as those do not have a native look and feel to begin with, and for anyone who is after a consistent, unified experience across desktop, mobile, and embedded platforms.

Qt OpenCV demo on the desktop

The post Qt 5.5, computer vision, and the Nitrogen6X appeared first on Qt Blog.

Cross-platform OpenGL ES 3 apps with Qt 5.6


Now that the alpha release for Qt 5.6 is here, it is time to take a look at some of the new features. With the increasing availability of GPUs and drivers capable of OpenGL ES 3.0 and 3.1 in the mobile and embedded world, targeting the new features these APIs provide has become more appealing to applications. While nothing stops the developer from using these APIs directly (that is, #include <GLES3/gl3.h>, call the functions directly, and be done with it), the cross-platform development story used to be less than ideal, leading to compile and run-time checks scattered across the application. Come Qt 5.6, this will finally change.

Support for creating versioned OpenGL (ES) contexts has been available in Qt 5 for some time. Code snippets like the following should therefore present no surprises:

int main(int argc, char **argv) {
    QSurfaceFormat fmt;
    fmt.setVersion(3, 1);
    fmt.setDepthBufferSize(24);
    QSurfaceFormat::setDefaultFormat(fmt);

    QGuiApplication app(argc, argv);
    ...
}

It is worth pointing out that due to the backwards compatible nature of OpenGL ES 3 this may seem unnecessary with many drivers because requesting the default 2.0 will anyway result in a context for the highest supported OpenGL ES version. However, this behavior is not guaranteed by any specification (see for example EGL_KHR_create_context) and therefore it is best to set the desired version explicitly.

The problem

So far so good. Assuming we are running on an OpenGL ES system, we now have a context and everything ready to utilize all the goodness the API offers. Except that we have no way to easily invoke any of the 3.0 or 3.1 specific functions, unless the corresponding gl3.h or gl31.h header is included and Qt is either a -opengl es2 build or the application explicitly pulled in -lGLESv2 in its .pro file. In which case we can wave goodbye to our sources’ cross-platform, cross-OpenGL-OpenGL ES nature.

For OpenGL ES 2.0 the problem has been solved for a long time now by QOpenGLFunctions. It exposes the entire OpenGL ES 2.0 API and guarantees that the functions are resolved correctly everywhere where an OpenGL context compatible with either OpenGL ES 2.0 or OpenGL 2.0 plus the FBO extension is available.

Before moving on to introducing the counterpart for OpenGL ES 3.0 and 3.1, it is important to understand why the versioned OpenGL function wrappers (for example, QOpenGLFunctions_3_2_Core) are not a solution to our problem here. The versioned wrappers are great when targeting a given version and profile of the OpenGL API. However, they lock in the application to systems that support that exact OpenGL version and profile, or a version and profile compatible with it. Building and running the same source code on an OpenGL ES system is out of question, even when the code only uses calls that are available in OpenGL ES as well. So it turns out that strict enforcement of the compatibility rules for the entire OpenGL API is not always practical.

Say hello to QOpenGLExtraFunctions

To overcome all this, Qt 5.6 introduces QOpenGLExtraFunctions. Why “extra” functions? Because adding all this to QOpenGLFunctions would be wrong in the sense that everything in QOpenGLFunctions is guaranteed to be available (assuming the system meets Qt’s minimum OpenGL requirements), while these additional functions (the ES 3.0/3.1 API) may be dysfunctional in some cases, for example when running with a real OpenGL (ES) 2.0 context.

The usage is identical to QOpenGLFunctions: either query a context-specific instance from QOpenGLContext via QOpenGLContext::extraFunctions() or subclass and use protected inheritance. How the functions get resolved internally (direct call, dynamic resolving via dlsym/GetProcAddress, or resolving via the extension mechanism, i.e. eglGetProcAddress and friends) is completely transparent to the applications. As long as the context is OpenGL ES 3 or a version of OpenGL with the function in question available as an extension, it will all just work.

As an example let’s try to write an application that uses instanced drawing via glDrawArraysInstanced. We want it to run on mobile devices with OpenGL ES 3.0 and any desktop system where OpenGL 3.x (compatibility profile) is available. Needless to say, we want as little branching and variation in the code as possible.

For the impatient, the example is part of Qt and can be browsed online here.

Now let’s take a look at the most important pieces of code that allow the cross-OpenGL-OpenGL ES behavior.

Context versioning

int main(int argc, char *argv[])
{
    QSurfaceFormat fmt;
    fmt.setDepthBufferSize(24);
    if (QOpenGLContext::openGLModuleType() == QOpenGLContext::LibGL) {
        fmt.setVersion(3, 3);
        fmt.setProfile(QSurfaceFormat::CompatibilityProfile);
    } else {
        fmt.setVersion(3, 0);
    }
    QSurfaceFormat::setDefaultFormat(fmt);
    QGuiApplication app(argc, argv);
    ...
}

This looks familiar. Except that now, in addition to opting for version 3.0 with OpenGL ES, we request 3.3 compatibility when running with OpenGL. This is important because we know that instanced drawing is available there too. Therefore glDrawArraysInstanced will be available no matter what.

The fact that we are doing runtime checks instead of ifdefs is due to the dynamic OpenGL implementation loading on some platforms, for example Windows, introduced in Qt 5.4. There it is not necessarily known until runtime if the implementation provides OpenGL (opengl32.dll) or OpenGL ES (ANGLE).

Note that it may still be a good idea to check the actual version after the QOpenGLContext (or QOpenGLWidget, QQuickView, etc.) is initialized, just to be safe. QOpenGLContext::format(), once the context is successfully create()’ed, always contains the actual, not the requested, version and other information.
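The comparison itself is trivial; below is a minimal sketch, with (major, minor) pairs standing in for the values a real application would read from QOpenGLContext::format().majorVersion() and .minorVersion() after the context has been created:

```cpp
#include <utility>

// Check whether the actual context version is at least the wanted version.
// In real code `actual` would come from QOpenGLContext::format() after the
// context was successfully created.
bool versionAtLeast(std::pair<int, int> actual, std::pair<int, int> wanted)
{
    return actual.first > wanted.first
        || (actual.first == wanted.first && actual.second >= wanted.second);
}
```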

Astute readers may now point out that it should also be possible to request an OpenGL ES context unconditionally in case GLX_EXT_create_context_es2_profile or similar is supported. This would mean that instead of branching based on openGLModuleType(), one could simply set the format’s renderableType to QSurfaceFormat::OpenGLES. The disadvantage is obvious: that approach just won’t work on many systems. Hence we stick to compatibility profile contexts when running with OpenGL.

Shader version directive

    if (QOpenGLContext::currentContext()->isOpenGLES())
        versionedSrc.append(QByteArrayLiteral("#version 300 es\n"));
    else
        versionedSrc.append(QByteArrayLiteral("#version 330\n"));

What’s this? Our shader code is written in the modern GLSL syntax and is simple and compatible enough between GLSL and GLSL ES. However, there is a one line difference we have to take care of: the version directive. This is easy enough to fix up.

Note that with OpenGL implementations supporting GL_ARB_ES3_compatibility this is not needed as these should be able to handle 300 es shaders too. However, in order to target the widest possible range of systems, we avoid relying on that extension for now.
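The fix-up can be wrapped in a tiny helper. Here is a minimal sketch using std::string, with the context check replaced by a boolean parameter; in real code the flag would come from QOpenGLContext::currentContext()->isOpenGLES():

```cpp
#include <string>

// Prepend the appropriate GLSL version directive to a shader source that is
// otherwise compatible between GLSL 330 and GLSL ES 300.
std::string versionedShaderSource(const std::string &src, bool isOpenGLES)
{
    const char *directive = isOpenGLES ? "#version 300 es\n" : "#version 330\n";
    return std::string(directive) + src;
}
```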

Calling ES 3 functions

    QOpenGLExtraFunctions *f = QOpenGLContext::currentContext()->extraFunctions();
    ...
    f->glDrawArraysInstanced(GL_TRIANGLES, 0, m_logo.vertexCount(), 32 * 36);

And here’s why this is great: the actual API call is the same between OpenGL and OpenGL ES, desktop and mobile or embedded platforms. No checks, conditions, or different wrapper objects are needed.

Below a screenshot of the application running on a Linux PC and on an Android tablet with no changes to the source code. (What’s better than a single Qt logo? A Qt logo composed of 1152 Qt logos of course!)

OpenGL ES 3 test app

The hellogles3 example running on Linux. Make sure to check it out live in all its animated glory as the photos do no justice to it.

GLES3 demo app on Android

The exact same app running on Android

Summary

To summarize which API wrapper to use and when, let’s go through the possible options:

  • QOpenGLFunctions – The number one choice, unless OpenGL 3/4.x features are desired and the world outside the traditional desktop platforms is not interesting to the application. Cross-platform applications intending to run on the widest possible range of systems are encouraged to stick to this one, unless they are prepared to guard the usage of OpenGL features not in this class with appropriate runtime checks. QOpenGLFunctions is also what Qt Quick and various other parts of Qt use internally.
  • QOpenGLExtraFunctions – Use it in addition to QOpenGLFunctions whenever OpenGL ES 3.0 and 3.1 features are needed.
  • Versioned wrappers (QOpenGLFunctions_N_M_profile) – When an OpenGL 3/4.x core or compatibility profile is needed and targeting OpenGL ES based systems is not desired at all.

Bonus problem: the headers

Before going back to coding, an additional issue needs explaining: what about the GL_* constants and typedefs? Where do the OpenGL ES 3.x specific ones come from if the application does not explicitly include GLES3/gl3.h or gl31.h?

As a general rule Qt applications do not include OpenGL headers themselves. qopengl.h, which is included by the QOpenGL class headers, takes care of this.

  • In -opengl es2 builds of Qt, which is typical on mobile and embedded systems, qopengl.h includes the header for the highest possible ES version that was found when running configure for Qt. So if the SDK (sysroot) came with GLES3/gl31.h, then applications will automatically have everything from gl31.h available.
  • In -opengl desktop and -opengl dynamic builds of Qt the ES 3.0 and 3.1 constants are available because they are either part of the system’s gl.h or come from Qt’s own internal copy of glext.h, where the latter conveniently gives us all constants up to OpenGL 4.5 and is included as well from qopengl.h.

So in many cases it will all just magically work. There is a problematic scenario, in particular on mobile, though: if only gl2.h was available when building Qt, then applications will not get gl3.h or gl31.h included automatically even if the SDK against which applications are built has those. This can be a problem on Android for example, when Qt is built against an older NDK that does not come with ES 3 headers. For the time being this can be worked around in the applications by explicitly including gl3.h or gl31.h guarded by an ifdef for Q_OS_ANDROID.

That is all for now. For those wishing to hear more about the exciting news in Qt’s graphics world, Qt World Summit is the place to be. See you there in October!

The post Cross-platform OpenGL ES 3 apps with Qt 5.6 appeared first on Qt Blog.

Using modern OpenGL ES features with QOpenGLFramebufferObject in Qt 5.6


QOpenGLFramebufferObject is a handy OpenGL helper class in Qt 5. It conveniently hides the differences between OpenGL and OpenGL ES, enables easy usage of packed depth-stencil attachments, multisample renderbuffers, and some more exotic formats like RGB10. As a follow up to our previous post about OpenGL ES 3 enhancements in Qt 5.6, we are now going to take a look at an old and a new feature of this class.

Qt 5.6 introduces support for multiple color attachments, to enable techniques requiring multiple render targets. Even though this has been supported in OpenGL since version 2.0, OpenGL ES was missing it up until version 3.0. Now that mobile and embedded devices with GLES 3.0 support are becoming widely available, it is time for QOpenGLFramebufferObject to follow suit and add some basic support for MRT.

Our other topic is multisample rendering to framebuffers. This has been available for some time in Qt 5, but was usually limited to desktop platforms as OpenGL ES does not have this as a core feature before version 3.0. We will now go through the basic usage and look at some common pitfalls.

Multisample framebuffers

By default our framebuffer will have a texture as its sole color attachment. QOpenGLFramebufferObject takes care of creating this texture; applications can then query it via texture() or even take ownership of it, detaching it from the FBO, by calling takeTexture().

What if multisampling is desired? Requesting it is simple by using the appropriate constructor, typically like this:

    QOpenGLFramebufferObjectFormat format;
    format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
    format.setSamples(4);
    fbo = new QOpenGLFramebufferObject(width, height, format);

Here 4 samples per pixel are requested, together with depth and stencil attachments, as the default is to have color only. The rest of the defaults in QOpenGLFramebufferObjectFormat matches the needs of most applications so there is rarely a need to change them.

Now, the obvious question:

What if support is not available?

When support for multisample framebuffers is not available, either because the application is running on an OpenGL ES 2.0 system or the necessary extensions are missing, QOpenGLFramebufferObject falls silently back to the non-multisample path. To check the actual number of samples, query the QOpenGLFramebufferObjectFormat via format() and check samples(). Once the framebuffer is created, the returned format contains the actual, not the requested, number of samples. This will typically be 0, 4, 8, etc. and may not match the requested value even when multisampling is supported since the requests are often rounded up to the next supported value.

    ...
    fbo = new QOpenGLFramebufferObject(width, height, format);
    if (fbo->isValid()) {
        if (fbo->format().samples() > 0) {
            // we got a framebuffer backed by a multisample renderbuffer
        } else {
            // we got a non-multisample framebuffer, backed by a texture
        }
    }
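To illustrate the rounding behavior mentioned above, here is a hypothetical sketch; the list of supported sample counts is driver-specific and would normally be queried from the implementation, not hard-coded:

```cpp
#include <algorithm>
#include <vector>

// Round a requested sample count up to the next value the driver supports
// (0 meaning no multisampling). Falls back to the highest supported value
// when the request exceeds all of them.
int effectiveSampleCount(int requested, std::vector<int> supported)
{
    std::sort(supported.begin(), supported.end());
    for (int s : supported) {
        if (s >= requested)
            return s;
    }
    return supported.empty() ? 0 : supported.back();
}
```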

What happens under the hood?

QOpenGLFramebufferObject’s multisample support is based on GL_EXT_framebuffer_multisample and GL_EXT_framebuffer_blit. The good thing here is that glRenderbufferStorageMultisample and glBlitFramebuffer are available in both OpenGL 3.0 and OpenGL ES 3.0, meaning that relying on multisample framebuffers is now feasible in cross-platform applications and code bases targeting desktop, mobile, or even embedded systems.

As usual, had Qt not cared for older OpenGL and OpenGL ES versions, there would have been some other options available, like using multisample textures via GL_ARB_texture_multisample or GL_ARB_texture_storage_multisample instead of multisample renderbuffers. This would then require OpenGL 3.2 (OpenGL 4.3 or OpenGL ES 3.1 in case of the latter). It has some benefits, like avoiding the explicit blit call for resolving the samples. It may be added as an option to QOpenGLFramebufferObject in some future Qt release, let’s see.

A note for iOS: older iOS devices, supporting OpenGL ES 2.0 only, have multisample framebuffer support via GL_APPLE_framebuffer_multisample. However, API-wise this extension is not fully compatible with the combination of EXT_framebuffer_multisample+blit, so adding support for it is less appealing. Now that newer devices come with OpenGL ES 3.0 support, it is less relevant since the standard approach will work just fine.

So what can I do with that renderbuffer?

In most cases applications will either want a texture for the FBO’s color attachment or do a blit to some other framebuffer (or the default framebuffer associated with the current surface). In the multisample case there is no texture, so calling texture() or similar is futile. To get a texture we will need a non-multisample framebuffer to which the multisample one’s contents is blitted via glBlitFramebuffer.

Fortunately QOpenGLFramebufferObject provides some helpers for this too:

    static bool hasOpenGLFramebufferBlit();
    static void blitFramebuffer(QOpenGLFramebufferObject *target, const QRect &targetRect,
                                QOpenGLFramebufferObject *source, const QRect &sourceRect,
                                GLbitfield buffers,
                                GLenum filter,
                                int readColorAttachmentIndex,
                                int drawColorAttachmentIndex);

Combined with some helpful overloads, this makes resolving the samples pretty easy:

    ...
    GLuint texture;
    QScopedPointer<QOpenGLFramebufferObject> tmp;
    if (fbo->format().samples() > 0) {
        tmp.reset(new QOpenGLFramebufferObject(fbo->size()));
        QOpenGLFramebufferObject::blitFramebuffer(tmp.data(), fbo);
        texture = tmp->texture();
    } else {
        texture = fbo->texture();
    }
    ...

Checking for the availability of glBlitFramebuffer via hasOpenGLFramebufferBlit() is not really necessary because the multisample extension requires the blit one, and, on top of that, QOpenGLFramebufferObject checks for the presence of both and will never take the multisample path without either of them. Therefore checking format().samples(), as shown above, is sufficient. It is worth noting that creating a new FBO (like tmp above) over and over again should be avoided in production code. Instead, create it once, together with the multisample one, and reuse it. Note also that having depth or stencil attachments for tmp is not necessary.

And now on to the shiny new stuff.

Multiple render targets

In Qt 5.6 QOpenGLFramebufferObject is no longer limited to a single texture or renderbuffer attached to the GL_COLOR_ATTACHMENT0 attachment point. The default behavior does not change, constructing an instance will behave like in earlier versions, with a single texture or renderbuffer. However, we now have a new function called addColorAttachment():

void addColorAttachment(const QSize &size, GLenum internalFormat = 0);

Its usage is pretty intuitive: as the name suggests, it will create a new texture or renderbuffer and attach it to the next color attachment point. For example, to create a framebuffer object with 3 color attachments in addition to depth and stencil:

fbo = new QOpenGLFramebufferObject(size, QOpenGLFramebufferObject::CombinedDepthStencil);
fbo->addColorAttachment(size);
fbo->addColorAttachment(size);

The sizes and internal formats can differ. The default value of 0 for internalFormat leads to choosing a commonly suitable default (like GL_RGBA8). As for the sizes, the main rule is to keep in mind that rendering will get limited to the area that fits all attachments.
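The “area that fits all attachments” is simply the component-wise minimum of the attachment sizes. A minimal sketch of the rule, with plain (width, height) pairs standing in for QSize:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Rendering into an FBO with differently sized color attachments is limited
// to the intersection of all attachment sizes.
std::pair<int, int> usableRenderArea(const std::vector<std::pair<int, int>> &sizes)
{
    if (sizes.empty())
        return {0, 0};
    int w = sizes.front().first;
    int h = sizes.front().second;
    for (const auto &s : sizes) {
        w = std::min(w, s.first);
        h = std::min(h, s.second);
    }
    return {w, h};
}
```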

After this we will have a color attachment at GL_COLOR_ATTACHMENT0, 1 and 2. To specify which ones are written to from the fragment shader, use glDrawBuffers. As introduced in the previous post, QOpenGLExtraFunctions is of great help for invoking OpenGL ES 3.x functions:

QOpenGLExtraFunctions *f = QOpenGLContext::currentContext()->extraFunctions();
GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
f->glDrawBuffers(3, bufs);

We are all set. All we need now is a fragment shader writing to the different attachments:

...
layout(location = 0) out vec4 diffuse;
layout(location = 1) out vec4 position;
layout(location = 2) out vec4 normals;
...

The rest is up to the applications. QOpenGLFramebufferObject no longer prevents them from doing deferred shading and lighting, or other MRT-based techniques.

To check at runtime if multiple render targets are supported, call hasOpenGLFeature() with MultipleRenderTargets. If the return value is false, calling addColorAttachment() is futile.

Finally, now that there can be more than one texture, some of the existing QOpenGLFramebufferObject functions get either a new overload or a slightly differently named alternative. They need no further explanation, I believe:

QVector<GLuint> textures() const;
GLuint takeTexture(int colorAttachmentIndex);
QImage toImage(bool flipped, int colorAttachmentIndex) const;

That’s all for now, happy hacking!

The post Using modern OpenGL ES features with QOpenGLFramebufferObject in Qt 5.6 appeared first on Qt Blog.

Handwriting Recognition with Qt Virtual Keyboard 2.0


The Qt Virtual Keyboard has been available for a bit over one year and has been nicely adopted in various devices built with Qt. With the upcoming version 2.0 it will allow gesture-based text input as well as typing. In addition to handwriting recognition (HWR) functionality, the upcoming Qt Virtual Keyboard 2.0 brings performance optimizations, support for the Traditional Chinese language, and a couple of other features requested by customers.

The Qt Virtual Keyboard is a versatile text input component available for commercial Qt licensees. It provides a customizable framework for text input using a virtual keyboard – and now also handwriting recognition. The Qt Virtual Keyboard makes it easy to extend the provided set of input methods and to use different 3rd party engines for word prediction and character recognition. The core features of the Qt Virtual Keyboard are covered in a recent blog post about version 1.3 as well as the documentation.

The most important new features of Qt Virtual Keyboard 2.0 are:

  • Generic support for shape recognition based input methods
  • Full screen HWR mode (on-top writing)
  • A reference implementation of handwriting input method with open-source Lipi toolkit alphabetic + numeric recognizer integration (English)
  • Performance optimizations for Lipi toolkit and background processing of recognition results
  • Nuance T9 Write HWR integration (commercial product with support for over 40 languages)
  • Word reselect functionality using open-source Hunspell prediction framework
  • Support for runtime language switching and language selection from a Qt application
  • Traditional Chinese support and keyboard layout

We are eager to hear feedback of the new features, especially the handwriting recognition. If you are visiting the Qt World Summit, please drop by and see it in action. There will also be a technology preview release available for download in the coming weeks.

The post Handwriting Recognition with Qt Virtual Keyboard 2.0 appeared first on Qt Blog.

Embedded Linux news in Qt 5.6


With Qt 5.6 approaching, it is time to look at some of the new exciting features for systems running an Embedded Linux variant, just like we did for Qt 5.5 a while ago.

Support for NVIDIA Jetson Pro boards

Boards like the Jetson TK1 Pro running a Yocto-generated Vibrante Linux image have support for X11, Wayland, and even running with plain EGL without a full windowing system. The latter is accomplished by doing modesetting via the well-known DRM/KMS path and combining it with the EGL device extensions. This is different from what the eglfs platform plugin’s existing KMS backend, relying on GBM, offers; therefore Qt 5.5 is not functional in this environment. For more information on the details, check out this presentation.

Wayland presents a similar challenge: while Weston works fine due to having been patched by NVIDIA, Qt-based compositors built with the Qt Compositor framework cannot function out of the box since the compositor has to use the EGLStream APIs instead of Mesa’s traditional EGLImage-based path.

With Qt 5.6 this is all going to change. With the introduction of a new eglfs backend based on EGLDevice + EGLOutput + EGLStream, Qt applications will just work, similarly to other embedded boards:

eglfs on the Jetson TK1 Pro

The well-known Qt Cinematic Experience demo running with Qt 5.6 and eglfs on a Jetson TK1 Pro

Cross-compilation is facilitated by the new device makespec linux-jetson-tk1-pro-g++.

Wayland is going to be fully functional too, thanks to the patches that add support for EGL_KHR_stream, EGL_KHR_stream_cross_process_fd, and EGL_KHR_stream_consumer_gltexture in the existing wayland-egl backend of Qt Compositor.

Wayland on the Jetson with Qt

The qwindow-compositor example running on the Jetson with some Qt clients

All this is not the end of the story. There is room for future improvements, for example when it comes to supporting multiple outputs and direct rendering (i.e. skipping GL-based compositing and connecting the stream directly to an output layer à la eglfs to improve performance). These will be covered in future Qt releases.

Note that Wayland support on the Jetson should be treated as a technical preview for the time being. Compositors using the unofficial C++ APIs, like the qwindow-compositor example shown above, will work fine. However, QML and Qt Quick support is still work in progress at the time of writing.

Support for Intel NUC

Some of the Intel NUC devices make an excellent embedded platform too, thanks to the meta-intel and the included meta-nuc layers for Yocto. While these are ordinary x86-64 targets, they can be treated and used like ARM-based boards. When configuring Qt for cross-compilation, use the new linux-nuc-g++ device spec. Graphics-wise everything is expected to work like on an Intel GPU-based desktop system running Mesa. This includes both eglfs (using the DRM/KMS/GBM backend introduced in Qt 5.5) and Wayland.

Wayland on boards based on the i.MX6

Systems based on Freescale’s i.MX6 processors include a Vivante GC2000 GPU and driver support for Wayland. Qt applications have traditionally been working fine on the Weston reference compositor, see for example this previous post for Qt 5.4, but getting Qt-based compositors up and running is somewhat tricky due to some driver specifics that do not play well with QPA and eglfs. With Qt 5.6 this issue is eliminated as well: in addition to the regular Vivante-specific backend (eglfs_viv), eglfs now has an additional backend (eglfs_viv_wl) which transparently ensures proper functionality when running compositor applications built with the Qt Compositor framework. This backend will need to be requested explicitly, so for example to run the qwindow-compositor example, do QT_QPA_EGLFS_INTEGRATION=eglfs_viv_wl ./qwindow-compositor -platform eglfs (the -platform can likely be omitted since eglfs is typically the default).

OpenGL ES 3.0 and 3.1

As presented earlier, OpenGL ES 3 support is greatly enhanced in Qt 5.6. Using the new QOpenGLExtraFunctions class applications targeting embedded devices with GLES 3 capable drivers can now take the full API into use in a cross-platform manner.

libinput

Qt 5.5 introduced support for libinput when it comes to getting input events from keyboards, mice, touchpads, and touchscreens. Qt 5.6 takes this one step further: when libinput is available at build time, it will be set as the default choice in eglfs and linuxfb, replacing Qt’s own evdevkeyboard, mouse, and touch backends.

In some rare cases this will not be desirable (for example when using evdevkeyboard-specific keyboard layouts from the Qt 4 QWS times), and therefore the QT_QPA_EGLFS_NO_LIBINPUT environment variable is provided as a means to disable this and force the pre-5.6 behavior.
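The resulting backend selection described above can be summarized in a small sketch (the function and names here are illustrative, not actual Qt API):

```cpp
#include <string>

// Qt 5.6 default input backend selection on eglfs and linuxfb, as described
// above: libinput is used when it was available at build time, unless the
// QT_QPA_EGLFS_NO_LIBINPUT environment variable forces the pre-5.6 behavior,
// in which case Qt's own evdev backends are used.
std::string chooseInputBackend(bool libinputBuiltIn, bool noLibinputEnvSet)
{
    if (libinputBuiltIn && !noLibinputEnvSet)
        return "libinput";
    return "evdev";
}
```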

That’s it for now. Hope you will find the new Embedded Linux features useful. Happy hacking!

P.S. the Qt World Summit 2015 had a number of exciting talks regarding embedded development, for example Qt for Device Creation, Choosing the right Embedded Linux platform for your next project and many more. Browse the full session list here.

The post Embedded Linux news in Qt 5.6 appeared first on Qt Blog.

Qt Virtual Keyboard Updated with Handwriting Recognition


We are proud to present the new release of Qt Virtual Keyboard with Handwriting Recognition (HWR), performance improvements, Nuance T9 Write integration, and support for the Traditional Chinese language!

Qt Virtual Keyboard is now updated with new features and a new versioning scheme. As part of the recent licensing change announcement, the formerly commercial-only Qt Virtual Keyboard is now available also under the GPLv3 license for open-source users, in addition to commercial Qt licensees. We released a technology preview of Qt Virtual Keyboard 2.0 a while back, and have since been improving it based on the feedback received. We have also adopted a new version numbering scheme: the Qt Virtual Keyboard now follows Qt versions. With the upcoming Qt 5.6, Virtual Keyboard is still packaged only in the commercial installers; open source users need to get it from the repository. From Qt 5.7 onwards, Qt Virtual Keyboard is also included in the open source installers.

Qt Virtual Keyboard is a fully comprehensive out-of-the-box input solution. The most important new features of the Qt Virtual Keyboard include:

  • A reference implementation of a handwriting input method with Lipi toolkit alphabetic + numeric recognizer integration (English)
  • Performance optimizations for Lipi toolkit
  • Accelerated processing of HWR recognition results
  • Full screen HWR mode (on-top writing)
  • Nuance T9 Write HWR integration
  • Word reselect for Hunspell word prediction
  • Support for runtime language switching (from the application)
  • Traditional Chinese keyboard layout

Check out the following video to see the latest version of the Qt Virtual Keyboard in action:

Full screen HWR mode (on-top writing)

In the technology preview, the HWR integration used only the regular keyboard layout area as the HWR input area. In the new fullscreen HWR mode, which can be used in addition, the whole screen is used as the input area instead. Fullscreen HWR mode is activated from the keyboard by double-tapping the fullscreen HWR button; while it is active the keyboard is hidden. Trace input is activated and deactivated with a floating button on screen: single-tapping the button switches between writing and selection mode, and double-tapping it deactivates fullscreen HWR mode.

Handwriting in fullscreen mode

Performance optimizations for Lipi toolkit

We are using Lipi toolkit as an open source alternative handwriting recognition engine. Based on the technology preview, we found that it did not perform well enough on low-end hardware. One obvious reason for this is that the Lipi toolkit was not optimized to run on embedded devices. We conducted some analysis and improved performance through code-level optimization, gaining a neat 10-40% in recognition and data model loading!

Accelerated processing of recognition results

The Qt Virtual Keyboard runs the HWR tasks in a separate background thread, which allows the UI thread to continue its operation while the HWR tasks are running. The recognition results are now produced even faster by starting the recognition already while waiting for the user input timeout.

Nuance T9 Write HWR integration

Nuance T9 Write is a commercial HWR engine, which can be enabled at compile time if the user has a suitable license from Nuance. It is much faster than the Lipi toolkit on embedded hardware. The Nuance T9 Write engine is integrated into Qt Virtual Keyboard as an alternative recognition engine for the HWR mode. The initial Nuance T9 Write integration supports Latin languages, and is implemented in such a way that it is easy to add support for non-Latin languages in future releases. The integration can also be used with the on-top writing mode. x86 and ARM targets are currently supported.

Word reselect for Hunspell

We have added a word reselect feature for the Hunspell input method. Word reselect allows the user to activate predictions and spell corrections for an existing word by touching the word in the input field; previously it was not possible to reselect a word.

Support for runtime language switching from the application

Earlier versions only supported switching the language by pressing the language button on the virtual keyboard; there was no way to change the language from the application side. The Qt API does not provide a uniform mechanism to change the input language either: the QInputMethod API exposes the current input locale only as a read-only property. The virtual keyboard settings API is now extended to include options for controlling the input language.
  • New properties locale, availableLocales, and activeLocales are added to the settings
  • The locale (if defined) overrides the system default locale at startup
  • The locale property can also be used to change the language at runtime
  • The availableLocales property is read-only and provides the list of installed locales
  • The activeLocales property is an application-defined subset of availableLocales specifying the locales that can be activated at runtime; if the list is empty, all available locales are active
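The activeLocales rule can be sketched as a simple filter (illustrative code with hypothetical names, not the actual implementation):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// If activeLocales is empty, every available locale can be activated at
// runtime; otherwise only the listed subset of the available locales can.
std::vector<std::string> activatableLocales(const std::vector<std::string> &available,
                                            const std::vector<std::string> &active)
{
    if (active.empty())
        return available;
    std::vector<std::string> result;
    for (const auto &loc : available) {
        if (std::find(active.begin(), active.end(), loc) != active.end())
            result.push_back(loc);
    }
    return result;
}
```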

Traditional Chinese

Added support for the Traditional Chinese / Cangjie input method. The input method implementation is ported from an Apache 2.0 licensed third-party library.
The Qt Virtual Keyboard supports 3 different Chinese input methods:
  • Pinyin(Chinese simplified)
  • Cangjie
  • Zhuyin

The type of available input methods is configured at compile time.

Cangjie, Pinyin and Zhuyin keyboard layouts
Get Qt Virtual Keyboard
Qt Virtual Keyboard is included in the commercial Qt 5.6 packages, with Qt 5.6.0 final being released later in March. It has now also been contributed to open source under the GPLv3 license by The Qt Company, and will be part of the Qt 5.7 release packages. If you are an open source user, please check out the code from the repository. If you already have a commercial license, you can choose to install the new Qt Virtual Keyboard with handwriting support in conjunction with Qt 5.6.

The post Qt Virtual Keyboard Updated with Handwriting Recognition appeared first on Qt Blog.


Creating Digital Instrument Clusters with Qt


Qt is widely used in automotive infotainment systems with a number of OS and platform configurations. Some car manufacturers have already introduced Qt in their digital instrument clusters as well, and we believe there will be many more in the coming years. Therefore, we started focused research and development during the last year to make cluster creation with Qt more efficient, and recently presented the first-generation demonstrator at Embedded World 2016 in Nürnberg, Germany.

In the automotive industry there is a strong trend to create instrument clusters using digital graphics rather than traditional analog gauges. Unlike the first digital clusters of the 70’s, which used 7-segment displays to indicate speed, today’s cars typically show a digital representation of the analog speedometer along with a wide array of other information, such as the tachometer, navigation, vehicle information and infotainment content. The benefits compared to analog gauges are obvious: for example, it is possible to adapt the displayed items to the driver’s needs in different situations, as well as to easily create regional variants or adapt the style of the instrument cluster to the car model.

The cluster demonstrator we created runs Qt for Device Creation version 5.6 on NXP’s widely used i.MX6 CPU. To show the possibilities Qt brings, we are actively leveraging Qt functionality such as Qt Location for mapping data and GPS coordinates, Qt Multimedia for the reverse camera view, Qt SerialBus for the transfer of vehicle data via the CAN bus, and Qt 3D for visualization of the vehicle model in the diagnostics view. The whole UI is built with Qt Quick, and the logic is created in C++ using the versatile Qt APIs.

The following video presents the cluster demonstrator as shown in the Qt stand at the Embedded World event:

The main display of the cluster is a 12.3″ 1280×480 screen, and the second screen used in the demonstrator is a touch panel for simulating events such as the turn indicator, shifting into reverse, tire pressure dropping, a door being open, etc. The controller sends this information to the cluster via the CAN bus, and the cluster then reacts to the events accordingly.

The demonstrator runs on embedded Linux, using Qt for Device Creation as the baseline. In addition to embedded Linux, Qt supports many real-time operating systems, which are commonly used in digital instrument clusters. Using a real-time operating system makes it easier to achieve a functional safety certification for the system, as some real-time operating systems are already certified according to the required functional safety standards.

Our research and development efforts continue with the goal of making it straightforward to build leading digital instrument clusters with Qt. We believe that Qt is a good choice for building both the infotainment system and the cluster, and that it is beneficial to use the same technology in both of these screens. Stay tuned for more, or contact us to discuss how Qt can be used in automotive and other industries already today.

The post Creating Digital Instrument Clusters with Qt appeared first on Qt Blog.

Qt for Device Creation on Windows Host


One of the new and quite remarkable features we’re introducing with Qt 5.6 is the possibility to use a Windows host computer for embedded Linux development and deployment. The aim of the feature is to make the embedded development workflow as seamless as possible regardless of the host OS. Now Windows users, too, can use their machines to host their embedded development.

For an overview and a demo of the feature, check out this small video clip from Embedded World:

 

The workflow is based on our pre-built software stack, Boot to Qt, which is part of our Qt for Device Creation offering. Boot to Qt is a lightweight, Qt-optimized Linux software stack that fully utilizes the Yocto Project tooling. This gives you an immediate, pre-configured kick-start to embedded development with many common development boards. It also provides a good basis for customizing the stack for your own production needs and boards. For an overview of the whole Qt for Device Creation offering and the workflow with Boot to Qt, take a look at this video, recorded on a Linux host, with the workflow now also available on a Windows host:

More information about the workflow with Boot to Qt can be found here or directly from the documentation.

In this post, I will also give you instructions on how to get started with development on a Windows host computer. To set up Qt for Device Creation on Windows successfully, you must closely follow the instructions below.

Setting up Qt for Device Creation on Windows

1. Install adb from the Android SDK Tools

Qt Creator uses the Android Debug Bridge (adb) for target device deployment. You can install adb as part of the Android SDK Tools package, which can be downloaded from the Android developer site. Make sure to select Tools > Android SDK Platform-tools and Extras > Google USB Driver in the Android SDK Manager (nothing else is needed, so everything else can be deselected).

2. Installing VirtualBox for the Emulator

In addition to working directly on real hardware, Qt for Device Creation comes with a full emulator. The emulator relies on VirtualBox virtualization software which you need to install separately. You can download it from here.

After installation, start the VirtualBox user interface and select File > Preferences > Network to open VirtualBox network settings.

Edit the properties of the first host-only network (if you do not have a host-only network, create one):

  • Change the IPv4 address to 192.168.56.1 and the IPv4 network mask to 255.255.255.0.
  • In the DHCP Server tab, select the Enable Server check box.
  • Change the server address to 192.168.56.1.
  • Change both the lower and upper address bounds to 192.168.56.101.
  • If a firewall is enabled on the development host, it needs to allow TCP and UDP packets between your host and the virtual machine.
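The same network setup can also be sketched with VBoxManage on the command line (the interface name is an assumption; on Windows hosts it is typically “VirtualBox Host-Only Ethernet Adapter”, so check the output of VBoxManage list hostonlyifs for the actual name):

```shell
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.1 --netmask 255.255.255.0
VBoxManage dhcpserver add --ifname "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.1 --netmask 255.255.255.0 --lowerip 192.168.56.101 --upperip 192.168.56.101 --enable
```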

3. Installing Qt for Device Creation

Download the Qt for Device Creation installer, either from your Qt Account web portal (if you are an existing Qt licensee) or by requesting a free 30-day evaluation. The installer will let you select a directory where Qt for Device Creation will be installed.

In this post the installation directory is referred to as <INSTALL_DIR>. The default installation directory is C:\Qt.

4. Install Boot to Qt on Target Devices

Boot to Qt comes pre-built for a variety of common development boards. If you target only the emulator you can skip this step.

Before you are able to deploy and test your Qt application on hardware, you must flash the target device with an image that contains the Boot to Qt stack. On Windows, you can create bootable drives with the Boot to Qt Flashing Wizard. It can be found under <INSTALL_DIR>\Tools\b2qt\b2qt-flashing-wizard.exe or in Qt Creator under Tools > Flash Boot to Qt device.

After flashing, the device not only contains the stack but also boots by default into the demo launcher, which you can use to explore all kinds of Qt demos for embedded.

The next step is setting up the connection between the host computer and the device for developing and deploying your own applications.

5. Setting up USB Access to Embedded Devices

Again, if you are targeting only the emulator, you can skip this step.

You can confirm that the connection is working by running the following cmd command:
<android sdk install dir>/platform-tools/adb.exe devices -l

The output should be a list of connected Boot to Qt (and Android) devices, identified with a serial number and a name. If your device is missing from the list, or the serial number is displayed as “??????”, the connection is not working properly. Check that the device is powered on, and disconnect and reconnect the USB cable.
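If reconnecting the cable does not help, it is often worth restarting the adb server and listing the devices again (run from the platform-tools directory of your Android SDK Tools installation):

```shell
adb kill-server
adb start-server
adb devices -l
```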

Additionally, on Windows you may need to install or update the Android device driver. You can check whether a driver is already installed in the Device Manager while the device is attached. If you haven’t installed any driver yet, there should be a USB Function Filesystem entry under Other devices. If this is the case, install the USB driver with the following steps:

  • Open Other devices > USB Function Filesystem.
  • Switch to the Driver tab and click Update Driver.
  • Do not let Windows search automatically for an updated driver; select “Browse My Computer for driver software” instead.
  • Select “Let me pick from a list of device drivers on my computer”.
  • Click “Have Disk…”.
  • Install the driver located at <Android-SDK-Tools-install-dir>\extras\google\usb_driver\android_winusb.inf

If you already have an Android ADB Interface under Android Device and cannot discover any devices, you may need to update the driver. This can be done by running the previous steps on Android Device > Android ADB Interface.

The emulator may be listed as well. Its serial number is its IP and the port number: 192.168.56.101:5555.

6. Configuring a Device Kit in Qt Creator

After you have prepared the hardware, you must perform one final step to set up the development tools in Qt Creator for your device. That is, you must configure the correct device to be used for each build and run kit. Connect your device to the development host via USB and launch Qt Creator. In Qt Creator:

  • Select Tools > Options > Build & Run > Kits.
  • Select one of the predefined kits starting with Boot to Qt… that matches the type of your device.
  • Select the correct device in the Device field.
  • Select OK.

You are now ready to start developing for your device. For more information, see Building and Running an Example.

KitOptionsPage_DeviceHighlight

Create your first example

There you go, everything is set up. You are now ready to write your first small application!

From Qt Creator, create a new “Qt Quick Application (Boot to Qt)” project via the ‘New File or Project’ dialog and select the kits that should be used to build the project. When you have finished the wizard, select the current platform via the kit selector, and everything is in place to build your first project.

KitSelector

Once the project is built, it can be executed directly. The application is automatically deployed to the device before execution. The project can also be deployed manually via Build > Deploy Project <projectname>.

Let me know what you think!

The post Qt for Device Creation on Windows Host appeared first on Qt Blog.

Three Steps to a Successful Embedded Product


Developing an embedded product requires three main steps: selecting the proper hardware platform, selecting and setting up the operating system, and developing the user interface (UI). All three of these steps are closely tied together and have a significant impact on time-to-market, project costs, and the overall quality of the product. It is critical to select a hardware platform that also has excellent operating system support as well as the proper UI tools. This post dives into each of these steps and how they impact the overall project.

1. Choose Your Hardware Platform

Selecting the proper hardware platform is the first step in the embedded product process and can have a substantial effect on the overall costs of the project in terms of unit cost as well as development time. Some important questions to ask in the selection process are:

  • Is there a stable supply chain/guaranteed longevity/availability? If the hardware that is selected is no longer available in 6 months, all of the development time will be wasted causing substantial delays in the project and significant cost.
  • Is it a high quality product? Many designs are not done with best practices in mind which can cause severe headaches in the future due to device failures. Having to recall products tarnishes the brand and has a very high cost associated with replacement.
  • Is there flexibility to provide easy upgrade paths that reduce development time/cost for future revisions?

A variety of integration options are available including Single Board Computers (SBC), System-on-Modules (SOM) and custom designs.

The SBC option is typically a fully completed design that requires a display and power supply. SBCs can be pre-FCC certified meaning that users do not need to worry about certification issues. Some SBC options can be cost-reduced by de-populating components that are not required in production.

Nitrogen6_MAX: High-end Embedded single board computer based on the NXP/Freescale i.MX6 Quad Processor. Kit includes 5V Power Supply, 4GB microSD card with Linux OS, and Serial Console Cable.

The System-on-Module provides flexibility for those who have specific IP or circuitry that they would like to include on their carrier board. The SOM contains the CPU, memory, flash, and power supplies. The rest of the circuitry is designed into the carrier board.

The final hardware option is to hire a specialist company to design a semi-custom board to meet the exact specifications of the project. In production, these solutions will be the lowest cost option because they are designed for exactly what the project requires.

2. Select a Trusted Operating System

Having a stable, high quality operating system reduces development time and helps to get to market quicker. A poor quality operating system can waste software resources due to time spent fixing bugs and also creates potential product reliability issues in the future. Selecting from industry leading operating system options such as Android 5.0, Yocto, Ubuntu, QNX, CE7/CE2013, and Buildroot will reduce the risk of these issues.

3. Create Your UI

The third and final piece of the puzzle is the user interface. UI development is one of the most complex and time-consuming elements of an embedded project. By utilizing a known, tested UI development tool, embedded products statistically get completed faster, which accelerates time-to-market and reduces development costs.

On hardware (such as that from Boundary Devices) that supports Qt for Device Creation, it is possible to download the IDE and start developing within minutes.

To sum it all up…

When embarking on a new embedded project, take care to select proven hardware platforms and operating systems and make sure that your partners have the experience and capabilities to see you through the entire project. Having a close association with Qt will almost certainly give them (and you) a head start. As a Qt Technology Partner, Boundary Devices can offer you exactly that.

You can contact me here:  pejman@boundarydevices.com

About the Guest Blogger: Pejman is a partner at Boundary Devices and has been involved in hundreds of embedded projects – from design to manufacturing to software support.

The post Three Steps to a Successful Embedded Product appeared first on Qt Blog.

Fast-Booting Qt Devices, Part 1: Automotive Instrument Cluster


When creating embedded devices, one crucial part of the user experience is the boot-up time of the system. In many industries, such as automotive, there are also hard requirements for the boot-up time.

We get a lot of questions like “How quickly can devices built with Qt start?”,  “Can you boot up a system to a Qt application under 2 seconds?”, or, “What’s the minimum time required for a system to start a Qt application?”

Unfortunately, there is no single answer to these questions. Thus, we decided to create three blog posts that open up how to optimize your Qt-powered embedded systems. In this first part, we look into automotive instrument clusters as a specific use case and show our instrument cluster showcase fast-booting, with some benchmark statistics. The second part will focus more on the practical ‘how’: optimizing the Qt, and especially Qt Quick, application code. Finally, the third part will discuss the hardware side of boot-up optimization.

The Qt Automotive Instrument Cluster Showcase

In the automotive industry, there is a clear need for fast boot-up times. For example, it is crucial for a digital instrument cluster to start as quickly as possible: when the car is started, the instrument cluster needs to be up, running and reacting almost immediately. Besides a fluent user experience, this is also governed by official safety requirements. At the same time, as clusters get more digitalized and graphically much richer, with e.g. 3D content, this requires extra attention in the software design and the underlying platform/framework. To demonstrate that fast-booting instrument clusters are possible with Qt, we took our Embedded World 2016 instrument cluster demo setup and optimized it.

Take a look at the video to see the results:

The original boot-up time of the demo setup was a bloated 23 seconds. After the optimization, we were able to get the first parts of the screen visible in around 1.5 seconds, depending on which services are required.

Analyzing the time consumption

From the video we have analyzed and approximated where the time goes:

boot_time3

The complete boot time from power-off to the first frame being visible is 1,560 ms. As you can see, the majority of the time is actually spent on items other than the Qt application content. For those parts, we still think the time could be made even shorter by selecting different hardware that powers up faster. The board could also have faster memory and a faster memory bus, and U-Boot could be stripped down further or replaced with a custom boot loader. But a lot of improvement was achieved overall, especially through the Qt Quick application optimization.

Prioritizing views

The first and most important thing before starting the optimization is to decide what you want the user to see first. Then start your application UI with that thing only, and lazy-load the rest dynamically. We decided that the user must see the frame of the instrument cluster; after the frame is drawn, the rest of the components are loaded dynamically. First we load the gauges, and only after that the 3D car model.
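As an illustrative sketch of such staged loading (the component and file names here are made up for the example), Qt Quick’s Loader can chain each stage off the previous one:

```qml
import QtQuick 2.6

Item {
    // Stage 1: the cluster frame is part of the initial scene and
    // becomes visible with the first rendered frame.
    ClusterFrame { id: frame }

    // Stage 2: load the gauges asynchronously once the frame is up.
    Loader {
        id: gaugeLoader
        asynchronous: true
        source: "Gauges.qml"
    }

    // Stage 3: only start loading the heavy 3D car model after
    // the gauges have finished loading.
    Loader {
        id: carLoader
        asynchronous: true
        source: gaugeLoader.status === Loader.Ready ? "CarModel3D.qml" : ""
    }
}
```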

“Cheating” by pre-loading

One way to improve the user experience is to “cheat” a bit on when to actually start loading, if the startup has to be even faster than an optimized boot allows. For the instrument cluster use case, you could actually start the system when the car door is unlocked or opened. With this approach, the system is already running by the time the driver enters the car, and it only needs to engage the backlight of the instrument cluster when the driver switches the power on.

How was the Qt part optimized in practice?

For the practical how-to of optimizing the Qt Quick application, I’m going to need you to wait for the next part of this blog post series. In part two, we will talk about practical tips and tricks for optimizing a Qt application.

Stay tuned!

The post Fast-Booting Qt Devices, Part 1: Automotive Instrument Cluster appeared first on Qt Blog.

Creating Certified Systems with Qt


When there is a need to use Qt as part of a system requiring functional safety certification, it is typically best to analyze if the safety critical functionality can be separated. This approach allows the safety critical functionality to be implemented without the need to certify the complete software stack of the system.

In my earlier blog post I explained the fundamentals of functional safety and introduced some of the industry-specific standards for functional safety. If you missed it, I recommend reading it first. In this second post, I will explain what certification of Qt would mean in practice and present two architectures for achieving functional safety certification of a Qt-based system.

What about Certification of Qt for Functional Safety?

Qt has been used in various systems requiring certification, and a bit over a year ago we decided to investigate if and how Qt itself could be certified for functional safety. The analysis was mainly conducted against IEC 61508, which sets the baseline for industry-specific standards on functional safety, and the automotive ISO 26262. Two separate pre-studies were conducted by The Qt Company together with VTT Expert Services, a certification authority for various standards, including IEC 61508. Based on these pre-studies and the analysis of IEC 61508 and ISO 26262, it was concluded that it is in essence possible to certify core parts of the Qt framework for functional safety. However, balancing the effort, cost and functional changes the certification would require, we believe that pursuing a safety certification of the Qt framework itself is not sensible or meaningful, technically or business-wise. Our aim is to make it easier than before to use Qt in certified systems, and to work with our customers to achieve the certificates they need.

It is possible to use Qt as an integral part of a system certified for functional safety, without changes to the Qt framework. When the safety critical functionality of the certified system is adequately separated, it is not necessary to certify other parts of the system software. Thus it is possible to create a certified system with Qt without certifying the Qt libraries. In the next chapters, I will cover how to achieve the needed functional safety with a system containing Qt and present two alternative system architectures for this purpose.

Using Qt in Safety Critical Systems

If a product, for example an embedded system, contains both certified and non-certified components, it is crucial to adequately separate them. If the safety critical software can’t be reliably separated from the other software, the whole system must be certified for functional safety. In practice it is typically much better to choose a system architecture that separates the safety critical functionality from the other functionality. Such separation is especially beneficial when the system contains a complex UI and application functionality.

Leveraging a Real Time Operating System

One approach is to leverage a certified real-time operating system for the separation of certified and non-certified software. This is quite feasible, as Qt supports several real-time operating systems that have a proven way to separate safety critical and other functionality. This approach is especially good in applications where the unique capabilities of the chosen real-time operating system are a good fit for the application’s needs.

functional_safety_rtos

Figure 1. Using a certified RTOS to separate safety critical functionality.

With this architecture, the real-time operating system provides the capability to separate safety critical and other processes. The overall system architecture leverages just one operating system kernel, managing both safety critical and other processes. Typically, the real-time operating system can also guarantee that certain functions run when needed, a feature often beneficial when creating systems that need certification. As the operating system fully separates the functionality that is not safety critical, certification is needed only for the safety critical parts. Many real-time operating systems offer a certified or pre-certified kernel and toolchain, so using these saves time and effort in system-level certification.

Leveraging a Hypervisor

Another system architecture for separating the non-certified components from the certified, safety critical parts of the embedded system is to use a hypervisor. In this approach, a hypervisor runs two or more different operating systems: for example, one for the safety critical functionality and one or more for the other parts of the system. Typically, a so-called Type 1, or bare-metal, hypervisor is a good approach for embedded systems that need to run separate operating systems for certified and non-certified functionality.

functional_safety_hypervisor

Figure 2. Using a certified hypervisor to separate safety critical functionality.

The certified functionality can run on a simple real-time operating system that has sufficient capabilities to provide the needed safety functions. The non-certified functionality can run, for example, on embedded Linux, which offers the possibility to utilize all the capabilities of Qt. As the real-time operating system does not need to run Qt or any other complex components, it can be simpler and more lightweight than in the architecture that uses a real-time operating system for both separation and running Qt.

Keeping the certified parts as streamlined as possible is very beneficial, as it is easier to implement the requirements of safety critical software and there is no need to certify functionality that is not safety critical. Additionally, using a regular embedded Linux for the functionality that is not safety critical typically makes it easier to implement complex functionality, which can result in significant cost savings for the overall system.

Achieving Certified Graphics

In a Qt based system it is common that the UI functionality is provided by Qt. If there is no safety critical UI functionality, the approach for graphics is very straightforward: Qt can handle everything related to the display without impact to any of the safety critical functionality.

However, if the safety critical functionality includes some UI related items, for example a warning light or a message, it needs to be handled by the safety critical part of the system. This means that the architecture has to allow sharing of the screen between safety critical and regular UI components in a way that guarantees operation of the safety critical functionality.

One approach to achieving such co-existence on the display is to leverage the hardware layers of the system to guarantee that the safety critical UI component is drawn to the topmost layer. With this approach, the safety critical UI remains visible despite a possible failure condition in the other functionality, as any such failure cannot overwrite it. Another possibility is to handle all of the UI composition in the safety critical part of the system. Having composition done completely within the certified functionality slightly complicates the overall graphics architecture, but it is a possibility.

If there is dynamic content that mandates hardware acceleration for the safety critical functionality as well, the use of an OpenGL SC certified GPU software stack needs to be considered. Such an approach makes the system design more complex, so it is always easier if the UI requirements of the safety critical functionality are limited. The choice of architecture for safety critical user interface components depends on the capabilities of the hardware and operating system used, as well as on the application use case.

If you are interested in discussing more on the creation of functional safety certified systems with Qt, please contact us.

The post Creating Certified Systems with Qt appeared first on Qt Blog.
