Projects


A Brain Computer Interface System to Control Wheelchair for Severe Mobility Impaired Persons


This project focuses on building a Brain Computer Interface (BCI) system that assists severely mobility-impaired persons by enabling wheelchair control through neural signal interpretation. The core objective is to develop a safe, reliable, and adaptive assistive platform that improves independence and quality of life.

Timeline: 2025 - Present
Reference no: EWUCRT-RG-17(14)/2025(5)
Funding/Host: East West University Center for Research and Training (EWUCRT)
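To illustrate the control idea, here is a minimal sketch (our own illustration, not the project's actual implementation) of how intents decoded from neural signals might be translated into wheelchair commands. The intent labels, confidence threshold, and command names are all hypothetical; the key design point is the fail-safe default of stopping whenever the decoding is uncertain.

```python
# Hypothetical intent labels produced by an upstream neural-signal classifier,
# mapped to wheelchair motion commands.
COMMANDS = {
    "imagine_left": "TURN_LEFT",
    "imagine_right": "TURN_RIGHT",
    "imagine_feet": "FORWARD",
    "rest": "STOP",
}

CONFIDENCE_THRESHOLD = 0.8  # reject uncertain predictions for safety


def intent_to_command(label: str, confidence: float) -> str:
    """Translate a classifier prediction into a wheelchair command.

    Any unknown label or low-confidence prediction maps to STOP,
    so the chair never moves on an uncertain decoding.
    """
    if confidence < CONFIDENCE_THRESHOLD:
        return "STOP"
    return COMMANDS.get(label, "STOP")
```

In a real assistive system the threshold and label set would be calibrated per user, but the stop-by-default rule is a common safety pattern for BCI-driven actuators.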


BRAIN-IoT (EU H2020): dependable sensing and actuation in decentralized IoT systems


BRAIN-IoT focuses on complex scenarios where actuation and control are cooperatively supported by populations of IoT systems. The breakthrough targeted by BRAIN-IoT is to establish a framework and methodology supporting smart cooperative behaviour in fully decentralized, composable and dynamic federations of heterogeneous IoT platforms. BRAIN-IoT tackles future business-critical and privacy-sensitive IoT scenarios subject to strict dependability requirements.

In this complex setting, BRAIN-IoT enables smart autonomous behaviour in IoT scenarios involving heterogeneous sensors and actuators autonomously cooperating in complex, dynamic tasks. This is achieved by employing highly dynamic federations of heterogeneous IoT platforms able to support secure and scalable operations for future IoT use cases, backed by an open decentralized marketplace of IoT platforms and smart features that supports runtime deployment and reconfiguration. Open semantic models enforce interoperable operations and exchange of data and control features, supported by model-based development tools that ease prototyping and integration of interoperable solutions.

Overall, secure operations are guaranteed by a consistent framework providing authentication, authorization, and accounting (AAA) features in highly dynamic, distributed IoT scenarios, joined with solutions to embed privacy awareness and control features. The viability of the proposed approaches is demonstrated in two futuristic usage scenarios, namely Service Robotics and Critical Infrastructure Management, as well as through a series of proof-of-concept demonstrations in collaboration with ongoing IoT large-scale pilot initiatives.


MONSOON (EU SPIRE): model-based optimization for data-intensive industrial processes


MONSOON focuses on the optimization of data-intensive processes in manufacturing and process industries, which are characterized by a high degree of complexity and variability. The project aims to develop a model-based control framework that can optimize the performance of these processes while ensuring their stability and robustness. The framework will be based on advanced modeling techniques, such as machine learning and system identification, and will be designed to handle the large volumes of data generated by these processes. The project will also develop a set of tools and algorithms for real-time monitoring and control of data-intensive processes, which will be validated through a series of industrial case studies.
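The model-based optimization workflow described above can be sketched in a toy example (an assumed, simplified illustration, not MONSOON's actual framework): fit a data-driven surrogate model to historical process data, then search the surrogate for the operating point that minimizes a cost metric. The process variable, cost metric, and quadratic surrogate below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic historical data: defect rate vs. furnace temperature,
# with a true optimum near 350 degrees (purely illustrative).
temps = rng.uniform(300.0, 400.0, size=200)
defects = 0.001 * (temps - 350.0) ** 2 + 1.0 + rng.normal(0.0, 0.05, 200)

# Surrogate model: a quadratic fit, standing in for the machine-learning
# and system-identification models a real framework would use.
coeffs = np.polyfit(temps, defects, deg=2)
model = np.poly1d(coeffs)

# Optimize the surrogate over the admissible operating range.
candidates = np.linspace(300.0, 400.0, 1001)
best_temp = candidates[np.argmin(model(candidates))]
print(f"Recommended setpoint: {best_temp:.1f}")
```

In an industrial setting the surrogate would be retrained as new process data streams in, which is what makes real-time monitoring and model maintenance central to such a framework.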


Automated Knowledge Base Quality Assessment


In recent years, numerous efforts have been put towards sharing Knowledge Bases (KBs) in the Linked Open Data (LOD) cloud. These KBs are used for various tasks, including data analytics and building question answering systems. Such KBs evolve: their data (instances) and schemas can be updated, extended, revised and refactored. However, unlike in more controlled types of knowledge bases, the evolution of KBs exposed in the LOD cloud is usually unrestrained, which may cause the data to suffer from a variety of quality issues, both at a semantic level (contradictions) and at a pragmatic level (ambiguity, inaccuracies). This situation negatively affects data stakeholders such as consumers and curators.

Data quality is commonly understood as fitness for use in a given application or use case. Ensuring the quality of an evolving knowledge base is therefore vital. Since the data is derived from autonomous, evolving, and increasingly large data providers, manual data curation is impractical, and continuous automatic assessment of data quality is very challenging. Ensuring the quality of a KB is a non-trivial task, since KBs combine structured information supported by models, ontologies, and vocabularies with queryable endpoints, links, and mappings. In this project, we therefore explored two main areas in assessing KB quality: (i) quality assessment using KB evolution analysis, and (ii) validation using machine learning models.
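The evolution-analysis idea can be illustrated with a minimal sketch (a deliberately simplified example of ours, not the project's actual metrics): compare two snapshots of a KB, represented as sets of (subject, predicate, object) triples, and derive simple change statistics that can flag potential quality issues such as large unexplained deletions.

```python
def evolution_metrics(old_kb, new_kb):
    """Compare two KB snapshots (sets of triples) and report change stats."""
    added = new_kb - old_kb
    removed = old_kb - new_kb
    return {
        "added": len(added),
        "removed": len(removed),
        # Fraction of the old snapshot that disappeared; a high value
        # may indicate an unstable or erroneous release.
        "removal_ratio": len(removed) / len(old_kb) if old_kb else 0.0,
    }


# Two toy snapshots of a KB (prefixes and values are illustrative).
v1 = {
    ("db:Berlin", "rdf:type", "db:City"),
    ("db:Berlin", "db:country", "db:Germany"),
    ("db:Berlin", "db:population", "3500000"),
}
v2 = {
    ("db:Berlin", "rdf:type", "db:City"),
    ("db:Berlin", "db:country", "db:Germany"),
    ("db:Berlin", "db:population", "3645000"),  # value revised
}

metrics = evolution_metrics(v1, v2)
print(metrics)
```

Real evolution analysis works over much richer signals (class-level counts, schema changes, link stability), but even triple-level deltas between releases give a cheap first-pass quality indicator.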