R&D Projects

Projects currently in progress:

“NeurAdapt”, funded by the EU Horizon 2020 RIA Programme, via the AI4Media Open Call #1 issued and executed under the AI4Media project (Grant Agreement no. 951911). Learn more at  ai4media.eu

Description: The goal of the NeurAdapt project is to explore a new path in the design of deep CNNs, one that could enable a new family of more efficient and adaptive models for any application that relies on the predictive capabilities of deep learning. Inspired by recent advances in the study of biological interneurons, which highlight the importance of inhibition and random connectivity for the encoding efficiency of neuronal circuits, we investigate mechanisms that impart similar qualities to artificial CNNs. Established techniques such as channel gating, channel attention and calibrated dropout offer tools to formulate a novel building block for CNN models that expands the functional diversity of the standard convolutional layer. Furthermore, the stochastic nature of neuronal activity has the potential to enable the training of models with parametrized levels of sparsity, offering the capacity to control the inference/complexity trade-off on the fly, without any need for additional fine-tuning. The main outcome is a new methodology for designing efficient deep CNN architectures regardless of the specific task and target domain. NeurAdapt also aims to create new knowledge about the dynamic behavior of excitation-inhibition mechanisms in feed-forward DNNs, their potential for further development and their respective limitations.
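The channel-gating idea described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the project's actual design: the gating weights, the sigmoid gate and the sparsity knob are all illustrative assumptions.

```python
import numpy as np

def channel_gate(x, w_gate, sparsity=0.0):
    """Gate the channels of a feature map x of shape (C, H, W).

    A squeeze step (global average pooling) summarizes each channel, a
    learned linear map plus sigmoid produces per-channel gates, and an
    optional sparsity knob hard-zeroes the weakest gates so downstream
    compute for those channels could be skipped at inference time.
    """
    c = x.shape[0]
    squeeze = x.mean(axis=(1, 2))                       # (C,) channel descriptors
    gates = 1.0 / (1.0 + np.exp(-(w_gate @ squeeze)))   # sigmoid gates in (0, 1)
    if sparsity > 0:
        k = int(sparsity * c)                           # channels to suppress
        if k > 0:
            idx = np.argsort(gates)[:k]                 # weakest gates
            gates[idx] = 0.0                            # exact zero: skippable
    return x * gates[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w = rng.standard_normal((8, 8)) * 0.1
y = channel_gate(x, w, sparsity=0.5)
# half of the channels are exactly zero, so their downstream compute can be pruned
print(sum(np.allclose(y[i], 0) for i in range(8)))  # prints 4
```

Because the sparsity level is just a runtime argument, the same weights can serve different compute budgets, which is the essence of the "on-the-fly trade-off" the description refers to.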

“LOLIPOP | Lithium NiObate empowered siLIcon nitride Platform for fragmentation free OPeration in the visible and the NIR”, an EU-funded Horizon photonic integration project (G.A. 101070441). Learn more at  horizon-de-lolipop.eu

Description: LOLIPOP is a photonic integration project that aims to enable the silicon nitride platform to take the next step and fully flourish. The goal is to develop a disruptive platform offering the highest integration, modulation and second-order nonlinear performance across the entire spectrum from 400 to 1600 nm, based on the combination of the LNOI and silicon nitride (TriPleX) technologies.

Our Role: Irida Labs is responsible for the design and training of the image recognition and classification modules. To fulfil this task, we employ technologies from the field of TinyML, which encompasses techniques for super-efficient machine learning computation.

Keywords:
Embedded Vision Library, Deep Learning, Machine vision, Commercial Exploitation, Business Plan

“EV-Lib – Technological development of embedded deep learning software for machine vision systems and commercial exploitation in global markets”, co-financed by the EU and National funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE-2, project no. Τ2ΕΔΚ-03212. Learn more at  EV-lib

Description: The main goal of the EV-Lib project is to develop innovative vision AI solutions for Industry 4.0 and industrial automation applications such as human presence detection in industrial spaces; product (e.g. pallet) detection, counting and tracking; and product recognition and interaction. Within the EV-Lib project, Irida Labs aims to (1) continue the technological evolution of our vision AI library to stay at the cutting edge of technology and remain competitive worldwide; (2) produce a feasibility and market analysis study for the EV-Lib product and explore a business model for its further commercial exploitation; (3) extend that study to commercial exploitation in different geographic markets; (4) enhance the IP portfolio with patents at the USPTO and elsewhere; and (5) participate in international exhibitions and individual B2B meetings to promote EV-Lib.

“A Hybrid (Additive and Subtractive) Processing Robotic system”, co-financed by the EU and National funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE, project no. Τ2ΕΔΚ-03896. Learn more at  hybridr-project.eu

Description: Additive Manufacturing (AM) technologies have developed rapidly over the last decade. Metal AM is now adopted in numerous industries, such as naval, energy, aerospace and automotive, providing a very flexible solution for processing exotic materials (Ni-alloys, titanium, etc.) and enabling repair of components that would not be possible with conventional processing. However, AM alone cannot produce parts that meet the requirements of these industries. To this end, subtractive processes have to be combined in a hybrid solution in order to achieve the required quality for the part, in terms of dimensional accuracy and surface finish. HybridR develops an innovative robotic-based platform that incorporates Laser Metal Deposition, milling and grinding as enabling technologies. The use of a robot as a base structure provides very high flexibility and showcases the strong potential of robots in the manufacturing industry.

Our Role: Vision-based monitoring and quality assessment in Laser Metal Deposition, Milling and Grinding (i.e. melt pool monitoring and parts’ geometry) on a robotic-based Additive manufacturing platform.

“Development of an innovative and flexible system of terrestrial meteorological, atmospheric and solar measurements with the synergy of physical models and computer vision and deep learning techniques”, co-financed by the EU and National funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE, project no. Τ2ΕΔΚ-00681. Learn more at  deepsky-project.com

Description: The main goal of the DeepSky project is to develop an innovative and flexible ground-based measurement system designed to address the needs of end-users in the meteorological, atmospheric and solar energy communities. DeepSky builds on the high scientific potential of, and technological advances in, instrumentation based on the analysis of digital images of the celestial dome, the modeling of solar/thermal radiation propagation, and the contribution of computer vision and deep learning methods.

Our Role: Computer vision and deep learning techniques for the analysis of digital all-sky images (cloud types and cloud coverage, cloud prediction etc.).
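As an illustration of the cloud-coverage step, a classical baseline in all-sky imaging is to threshold the red-to-blue ratio of each pixel. The sketch below uses this generic technique with an illustrative threshold; it is not DeepSky's calibrated method.

```python
import numpy as np

def cloud_coverage(rgb, threshold=0.7):
    """Estimate fractional cloud coverage of an all-sky RGB image.

    Clear sky scatters blue light strongly, so its red/blue ratio is low;
    clouds are spectrally flat, so their ratio approaches 1. Pixels whose
    ratio exceeds the threshold are counted as cloud.
    """
    rgb = rgb.astype(np.float64)
    red, blue = rgb[..., 0], rgb[..., 2]
    ratio = red / np.maximum(blue, 1e-6)   # avoid division by zero
    return float((ratio > threshold).mean())

# synthetic image: left half "clear sky" (blue-dominant), right half "cloud" (gray)
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, :4] = (40, 80, 200)    # clear: red/blue ratio 0.2
img[:, 4:] = (180, 180, 180)  # cloud: red/blue ratio 1.0
print(cloud_coverage(img))    # prints 0.5
```

Real systems refine this baseline with sun-position masking and learned classifiers, which is where the deep learning techniques mentioned above come in.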

“Fluently – The essence of human-robot interaction”, funded by the EU’s Horizon 2020 research and innovation programme (HORIZON-CL4-2021-TWIN-TRANSITION-01-01) under grant agreement No. 958417. Learn more at  fluently-horizonproject.eu

Description: Fluently leverages the latest advancements in AI-driven decision-making processes to achieve true social collaboration between humans and machines, even in highly dynamic manufacturing contexts. The project results will be (1) the Fluently Smart Interface unit and (2) the Robo-Gym. The Fluently Smart Interface unit features: (a) interpretation of speech content, speech tone and gestures, automatically translated into robot instructions, making industrial robots accessible to any skill profile; (b) assessment of the operator’s state through a dedicated sensor infrastructure that supports persistent context awareness, enriching an AI-based behavioural framework in charge of triggering specific robot strategies; (c) modelling of products and production changes in a way that robots, in cooperation with humans, can recognize, interpret and match.

Our Role: Irida Labs is responsible for the development of vision AI solutions for product/part detection, recognition and tracking; defect detection for quality assurance; and pose estimation of parts/products in industrial environments, using RGB and ToF sensors.

“Prometheus – Context-aware Adaptive 3D Projection based on Motion and Activity Estimation for the Immersive and Interactive Experience of Ancient Drama Performances”, co-financed by the EU and National funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, GSRT, project no. Τ6YΒΠ-00490. Learn more at   ntng.gr

Description: The PROMETHEUS project develops along three major directions, which converge towards the revival of older, highly important ancient drama plays with an educational and experiential character, through a live theater play or an open living lab installed at the National Theater of Northern Greece: (i) creation of a digital library of 3D objects used in older ancient drama plays; (ii) design and implementation of innovative computer vision and 3D projection tools that the various stakeholders (scenographer, costume designer, etc.) can use to integrate augmented reality into new plays and build immersive spaces for the actors; (iii) design and implementation of technologically advanced tools that allow the public to interact with the play and the actors, while better understanding the historical and cultural concepts promoted by ancient drama. The objective is achieved by implementing PROMETHEUS, a novel, integrated system based on computer vision and 3D projection techniques, which allows the creation of 3D digital objects with interactive and dynamic properties and their integration into new ancient drama plays in an augmented reality immersive space.

Our Role: Novel vision AI techniques based on computer vision and 3D projection, for the creation of 3D digital objects.

Keywords: Smart cities, smart surveillance, embedded machine vision, deep learning, IoT devices

“SmartAEye – Embedded machine vision software for IoT devices targeting the “smart cities” and “smart surveillance” markets, based on deep learning techniques”, co-financed by the EU and National funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE-2, project no. Τ2ΕΔΚ-00681. Learn more at   iridalabs.com/smartaeye

Description: SmartAEye develops embedded machine vision software based on deep learning techniques, designed to be integrated into IoT devices for the “smart cities” and “smart surveillance” markets. The SmartAEye software will be embedded on different hardware platforms (CPU, GPU or full SoCs) to demonstrate both its functionality and its commercial exploitation. Functionalities include privacy-preserving (GDPR-compliant) people detection and counting for statistical purposes in indoor or outdoor environments, detection and classification of parking spaces (smart parking application), vehicle detection and counting, etc. The SmartAEye goals are to (1) create a complete data set for different “smart city” and “smart surveillance” scenarios; (2) develop machine vision technology based on AI and deep learning; (3) exploit deep learning techniques and Irida’s know-how in training neural network modules and real-time implementation of deep learning architectures for commercial purposes; and (4) pilot the functionalities on embedded systems (edge devices) using heterogeneous computing techniques.
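The privacy-friendly counting logic can be sketched independently of the detector. The example below is a hedged illustration: it assumes centroid tracks already produced by a (hypothetical) people detector and tracker, and counts virtual-line crossings, so no imagery or identity ever needs to be stored.

```python
def count_crossings(tracks, line_y=100):
    """Count objects crossing a virtual horizontal line at y = line_y.

    Each track is a list of (x, y) centroids over time, e.g. from a
    people detector plus tracker. Only crossing events are recorded,
    which is one way such systems keep counting GDPR-friendly.
    """
    entered = exited = 0
    for track in tracks:
        for (x0, y0), (x1, y1) in zip(track, track[1:]):
            if y0 < line_y <= y1:      # moved downward across the line
                entered += 1
            elif y1 < line_y <= y0:    # moved upward across the line
                exited += 1
    return entered, exited

tracks = [
    [(50, 80), (52, 95), (55, 110)],   # crosses y=100 downward: one entry
    [(90, 130), (88, 104), (85, 90)],  # crosses y=100 upward: one exit
    [(10, 40), (12, 60)],              # never crosses the line
]
print(count_crossings(tracks))  # prints (1, 1)
```

In a deployed system the same logic runs on the edge device, emitting only aggregate counters to the cloud.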

“Vision-X – Integrated computer vision and AI platform”, co-financed by the Region of Western Greece, 2022.

Description: By the year 2022, the Internet of Eyes will be larger than the Internet of Things (IoT), with over 44 billion cameras operating in consumer and industrial applications. Most of these cameras will not produce images for the human eye to see. It is estimated that over 7 billion of them will be deployed in embedded applications and systems and will incorporate AI-based computer vision solutions. The field of applications and markets is particularly broad, including robotics, logistics, smart cities, smart retail, modern Industry 4.0 manufacturing and many others. However, developing such integrated products and solutions based on AI and computer vision requires broad and deep expertise in areas such as embedded hardware, system design, embedded software, and AI with machine learning.

The VisionX platform aims to provide a comprehensive answer for the embedded software required by IoT devices and solutions that integrate computer vision and artificial intelligence. Building on years of experience in advanced machine learning, design of camera-based computer vision systems, data management and embedded programming, the VisionX platform aims to harness the power of AI processing on edge devices, leading to IoT devices and solutions that are ~100x faster in processing and consume ~10x less power, without reduced performance.

Completed projects:

“Zero-defect wELDing for e-mobility”, funded under EIT Manufacturing, grant agreements no. 21122 (2021) and no. 22204 (2022). Learn more at  zeld-e.eu

Description: ZELD-e aims to reconfigure and update the existing monitoring and control schema of the welding processes involved in tab-to-tab (T2T) and tab-to-busbar (T2B) joining of battery packs (BP) used in EVs, in order to increase joint quality, reduce and even eliminate defective parts, optimize equipment productivity and energy consumption, and minimize development time and time-to-market. The proposed system is based on a multilevel approach, including enhanced sensorial configurations, data acquisition (DAQ) and control functionalities located at the edge (shop floor), backed by a centralized web-based platform with visualization, Quality Assessment (QA) and data processing/analysis capabilities, paired with a long-term control optimization schema.

Our Role: Vision-based monitoring, quality inspection (defect detection) using vision AI techniques and control schema for QA of laser welding processes in battery packs.

“Artificial Intelligence Applied to Space”, funded under H2020, grant agreement no. 776262. Learn more at  aida-space.eu/

Description: AIDA brings a transformational innovation to the analysis of heliophysics data in four steps: (1) Develops new open-source software, called AIDApy, written in Python and capable of collecting, combining and correlating data from different space missions. (2) Introduces modern data assimilation, statistical methods and machine learning (ML) to heliophysics data processing. (3) Combines real data from space missions with synthetic data from simulations, developing a virtual satellite component for AIDApy. (4) Deploys in AIDApy methods of Artificial Intelligence (AI) to analyse data flows from heliophysics missions.
AIDA uses the new AIDApy in selecting key heliophysics problems to produce a database (AIDAdb) of new high-level data products that include catalogs of features and events detected by ML and AI algorithms. Moreover, many of the AI methods developed in AIDA will themselves represent higher-level data products, for instance in the form of trained neural networks that can be stored and reused as a database of coefficients.

Our Role: Development of a Python-based AI framework (incl. AI model optimizations) for analysing space weather data flows.

“Analysis of sign language on mobile devices with focus on health services”, co-financed by the EU and National funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE, project no. Τ1ΕΔΚ-01299. Learn more at  xanthippi.ceid.upatras.gr/HealthSign

Description: HealthSign proposes to develop an application for the automated interpretation of the Greek Sign Language (GSL) over the internet, with a focus on health services, which are the most common reason to seek an interpreter. The high demand for interpreters is often unmet or involves long waits. Furthermore, the availability of interpreters facilitates the integration of the deaf community. Vision is probably the only sensor modality of practical use, because (a) only vision can capture both manual and non-manual cues, which provide essential information for sign language recognition (SLR); (b) camera-equipped hand-held devices with powerful processors are a commodity nowadays; and (c) recent advances in computer vision and machine learning make mainstream visual SLR a realistic option.

Our Role: Development of embedded vision AI techniques for hand/head pose detection and estimation, and for sign language recognition and interpretation using TinyML on edge devices.

“Real-time cloud-based quality assessment in materials welding”, co-financed by National and European funds, Operational Program 2014-2020, project no. ΔΕΡ5-0019990. Learn more at  realm-project.eu

Description: The REALM project develops a unified cloud technology platform for monitoring and evaluating the thermal welding process. The system collects and records information and data during the process, and the cloud platform makes the data and results of the welding process available to production engineers.

“Software framework for runtime-Adaptive and secure deep Learning On Heterogeneous Architectures”, funded under H2020, grant agreement no. 7807883.

Learn more at  aloha-h2020.eu

Description: Deep Learning (DL) algorithms are an extremely promising instrument in artificial intelligence. To foster their adoption in new applications and markets, a step forward is needed towards the implementation of DL inference on low-power embedded systems, enabling a shift to the edge computing paradigm. The main goal of ALOHA is to facilitate implementation of DL algorithms on heterogeneous low-energy computing platforms providing automation for optimal algorithm selection, resource allocation and deployment.

Our Role: Deep learning at the edge with an adaptive framework based on the Parsimonious Inference concept.

“Next generation bionics and smart prosthetics”, funded under H2020, grant agreement no. 678144.

Learn more at  symbionicaproject.eu  and  imveurope.com

Description: The Symbionica project aims to develop a reconfigurable machine to make personalised bionics and prosthetics using additive and subtractive manufacturing. The Symbionica system uses a vision-based solution for closing the loop in terms of process control in Additive Manufacturing (AM).

The solution is based on monitoring the AM process in real time with a comprehensive vision system, which interacts with the machine process algorithms to detect and correct deposition errors. The system should improve the accuracy of printed parts and their material properties, leading towards zero-defect AM.

Our Role: Vision-based solution (i.e. melt pool monitoring and QA in 3D parts’ geometry) for closing the loop in Additive Manufacturing (AM)
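As a rough illustration of melt pool monitoring (not the actual Symbionica pipeline), a thermal frame can be thresholded to segment the pool, and the resulting area and centroid are the quantities a closed-loop controller would compare against nominal values. The threshold and nominal area below are illustrative assumptions.

```python
import numpy as np

def melt_pool_metrics(frame, hot_threshold=200):
    """Segment the melt pool in a thermal frame and return (area, centroid).

    Pixels above the intensity threshold are taken as molten. The area and
    centroid are fed back to the deposition controller, which corrects the
    process in real time when they drift from nominal values.
    """
    mask = frame > hot_threshold
    area = int(mask.sum())
    if area == 0:
        return 0, None
    ys, xs = np.nonzero(mask)
    return area, (float(ys.mean()), float(xs.mean()))

frame = np.zeros((10, 10), dtype=np.uint8)
frame[4:6, 4:7] = 250                      # a 2x3 hot region
area, centroid = melt_pool_metrics(frame)
print(area, centroid)                      # prints: 6 (4.5, 5.0)
nominal_area = 6                           # illustrative nominal value
assert abs(area - nominal_area) <= 1       # within tolerance: no correction needed
```

Production systems add blob filtering, camera calibration and temporal smoothing on top of this basic segment-and-measure loop.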

“BOREALIS – The 3A energy class Flexible Machine for the new Additive and Subtractive Manufacturing on next generation of complex 3D metal parts”, funded under H2020, grant agreement no. 636992.

Our Role: Vision-based solution (i.e. melt pool monitoring and QA in 3D parts’ geometry) for closing the loop in Additive Manufacturing (AM)

“Time, Energy and security Analysis for Multi/Many-core heterogeneous PLAtforms”, funded under H2020, grant agreement no. 779882.

Learn more at  teamplay-h2020.eu

Description: The TeamPlay project aims to develop new, formally-motivated techniques that allow execution time, energy usage, security, and other important non-functional properties of parallel software to be treated effectively, as first-class citizens. We will build this into a toolbox for developing highly parallel software for low-energy systems, as required by the Internet of Things, cyber-physical systems etc.

Our Role: Energy-Time-Security (ETS) optimization of Deep Learning models using Teamplay tools

“White room based on Reconfigurable robotic Island for optoelectronics”, an FP7 project funded under FoF.NMP.2013 – 2.

Learn more at  cordis.europa.eu/project/id/609228

Description: The young optoelectronics industry has reached critical mass: it already accounts for more than 10% of the European economy, employs 290,000 people and sustains stable double-digit growth in current and coming years. Europe plays a leading role in R&D (>1,000 active research organizations) and can still compete with Far East and American manufacturers. white’R is a necessary step to translate this R&D excellence into future leadership in manufacturing high-value-added optoelectronic devices. The white’R production island aims to move away from the manual assembly processes that have characterized the industry for decades, towards high-accuracy, high-yield, automated methods.
The new manufacturing concept is based on the combination of fully automated, self-contained “white room” modules whose components – robots, end effectors, transport, handling and tooling systems – are conceived as “Plug&Produce” mechatronic sub-modules, configured in line with production requirements. The technical objectives of the white’R system are: a 50% cost reduction compared to current production systems; a 30% set-up and ramp-up time reduction through self-adaptive reconfigurability; all components of the production system reusable, re-assembled and upgraded in a new, different system; and the creation of an EU/international standard for optoelectronic package configuration.

Our Role: Mapping and micro-localization of industrial components using laser technology and computer vision

“MANTO – Innovative blind escort applications for autonomous navigation outdoors and in museums”, co-financed by the EU and National funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE, project no. Τ1ΕΔΚ-00593.

“Technological development and commercial exploitation of an embedded computer vision and machine learning system on a global scale”, co-financed by the EU and National funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE, project no. Τ1ΕΔΚ-01489.

“Development of an embedded system for road condition monitoring based on computer vision and deep learning techniques”, co-financed by the EU and National funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE, project no. Τ1ΕΔΚ-05231.

“Analysis, modeling and multi-spectral sensing for the predictive management of verticillium wilt in olive groves”, co-financed by National and European funds, Operational Program 2014-2020, project no. ΔΕΡ6-0022623.

“Spectral Evidence of Ice”, co-financed by National and European funds, project no. MNET18/ICT3438.

“Implementation of a smart visual recognition software for mobile devices, for touristic applications”, co-financed by the EU and National funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE, project no. Τ1ΕΔΚ-01794.
