From code to traction: Architecting the next-generation software for Powered-Two-Wheelers

Dr. Christos Kotselidis, Senior Architect

PIERER Innovation GmbH (formerly KTM Innovation GmbH)

Digital Twins in Automotive

The ever-increasing digitalization of all aspects of our daily lives has found its way into the automotive sector in the form of digital twins. In this context, personalized driving experiences along with advanced safety and security features are deployed in modern vehicles and are paired with their drivers in autonomous and/or semi-autonomous ways. At the core of this enhanced driving experience is the deployment and optimization of numerous on-board vehicle sensors coupled with advanced processing capabilities, either on-board or on back-end cloud/data services. The always-connected vehicles communicate both with services that perform algorithmic calculations and that collect, aggregate, and collate data from within the Powered Two Wheeler (PTW), and with other vehicles in proximity (V2X). All these actions aim to enhance the driving experience while minimizing safety risks and environmental impact.

PTWs, which include both motorcycles and e-bikes, adopt, to an extent, the trends and latest developments of the automotive domain in an effort to enhance riders’ experiences via sophisticated hardware/software co-design; a tremendous challenge that requires different solutions compared to classic automotive.

Software/Hardware challenges in the domain of PTWs

The deployment of a hardware/software solution on a PTW faces diverse challenges compared to a car deployment, mainly due to physical and environmental issues. In general, the physical dimensions of a PTW constrain significantly both the amount and the type of hardware that can be deployed. In addition, the fact that the chassis of a PTW is mostly exposed to the weather elements results in a wide spectrum of temperature, dust, and vibration tolerances that need to be factored into the housing enclosures. The inevitable water- and dust-proof enclosure precludes active cooling, which significantly reduces both the power and performance envelope of deployed solutions. The constrained performance and power envelope directly affects the type, complexity, and capabilities of the use cases and code executed on-board. This is in stark contrast to automotive, where both the physical volume of a car and its more advanced sensor systems can accommodate high-performance computations on-board.

Software Implications

The constrained computational resources on a PTW require a balancing act between which computations shall and can be performed on-board and which shall be relayed to the back-end infrastructure after transferring data over a cellular or WiFi connection. Depending on that decision, different parameters come into play, creating a Pareto-optimal point within the parameter space of latency, cost, performance, dependability, and security.

For example, the more performance we need, the more computations we have to relay to the back-end, which increases both cost (due to data transfers via a mobile provider) and latency, since data has to be transferred and processed, and the results returned to the PTW. Similarly, if we decide to perform the computations on-board, we save costs and reduce latency, but we must limit the scope of the computation due to the limited CPU resources. To solve this challenge, one can follow two routes: a) pre-select which computations will run on-board and which on the back-end, or b) follow a dynamic approach in which this selection is applied on demand, automatically or semi-automatically.

Naturally, the second solution is more appealing as it allows us to react to changes during a ride. For example, in a touring scenario where a PTW is cruising on a highway, the data does not fluctuate much, and hence a sparser sampling period might be sufficient to perform the computations on-board. On the other hand, on a test track where a high-end motorbike is being fine-tuned, more rapid processing of aggregated data from multiple sensors may be required; a computational scenario that on-board processing may not accommodate.

The ELEGANT vision

Designing a selective strategy (i.e., which code to run where and under which conditions) is very challenging due to both the implementation complexity and the ad-hoc customization needed (per function) for each vehicle. If, in addition, one wants to build complex data pipelines, the situation gets even worse. Typically, the software development teams that write software for the embedded on-board system and for the back-end services use different tools and programming frameworks. For example, a simple function that filters out erroneous or useless signals (frames) from a CAN bus [1] stream will be implemented twice: in C/C++ for the embedded platform and in Python or Java for a back-end microservice that provides the same functionality. Evidently, this code duplication and fragmentation results in both higher costs (development and operational) and, most importantly, in missed optimization opportunities. The ELEGANT project aims to address this challenge via its unified programming API.

ELEGANT develops a unified programming API for IoT/Cloud deployments with the unique capability of expressing complex data pipelines, which can be built from both in-built data operators and user-defined ones. In addition, these operators can be automatically accelerated on GPUs, FPGAs, or other hardware accelerators [2] present on the deployed systems. Finally, and most importantly, the in-built ELEGANT orchestrator [3] will allow automatic selection of “where” to execute “what” without having to alter the code at all!

To give an example, below is a simple data pipeline, developed by the OEM, that filters and aggregates lean-angle values retrieved from the CAN bus using the NebulaStream API [4], which is part of the ELEGANT software stack.

auto data = Query::from("Motorbike");           // Initiate stream from motorbike's CAN bus
data.filter(Attribute("lean_angle") > 10);      // Filter out lean angle values < 10 degrees
data.apply(Aggregation(avg("lean_angle")));     // Average lean angles
data.addSink(KAFKASinkDescriptor::create(""));  // Add results to a Kafka stream

As shown in the NebulaStream pipeline above, we first retrieve the CAN frames from the CAN bus of a motorbike, and then filter out any lean-angle values below 10 degrees (essentially, we keep only the values for when the bike is in a curve). After that, we average the lean-angle values and write the results to an Apache Kafka stream.

This small example shows that a fairly capable data pipeline can be described with only four statements. The actual implementation of the invoked operators is handled transparently by NebulaStream. In addition, users can add their own custom operators in the form of UDFs (User-Defined Functions; not shown in this example). Finally, the same pipeline can be deployed on the edge and on the cloud, since compilation and binary generation are again handled transparently by NebulaStream.


As the various elements of the ELEGANT stack evolve, more capabilities are added to its layers, widening its applicability and making its features more accessible to the various domains that require IoT/Cloud interoperability.