<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Active_project | BIASlab</title><link>http://biaslab.github.io/tag/active_project/</link><atom:link href="http://biaslab.github.io/tag/active_project/index.xml" rel="self" type="application/rss+xml"/><description>Active_project</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 01 Dec 2025 00:00:00 +0000</lastBuildDate><image><url>http://biaslab.github.io/media/icon_hu_47940ffff6bbba19.png</url><title>Active_project</title><link>http://biaslab.github.io/tag/active_project/</link></image><item><title>AIM-TT</title><link>http://biaslab.github.io/project/aim-tt/</link><pubDate>Mon, 01 Dec 2025 00:00:00 +0000</pubDate><guid>http://biaslab.github.io/project/aim-tt/</guid><description>&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="http://biaslab.github.io/img/projects/aimtt-logo-tag.png" alt="" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AiMTT aims to cultivate a highly skilled and diverse AI talent pool equipped to address the opportunities and challenges of AI in mobility, transport, and logistics. By combining real-world case studies with knowledge development, this initiative fosters deep expertise in the field.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Mobility, transport, and logistics face a multitude of challenges: traffic congestion, livability concerns, conflicts between user, operator, and public interests, space constraints, and safety risks during large-scale events. These challenges are further complicated by their deep interconnections, making them particularly difficult to resolve.&lt;/p&gt;
&lt;p&gt;While such complexities can be overwhelming for the human mind, they offer ideal use cases for artificial intelligence (AI). AI can process vast amounts of data in real time, provide accurate network assessments, calculate impacts in future scenarios, and optimize interventions. It also enhances our understanding of human behavior and the mobility system as a whole.&lt;/p&gt;
&lt;p&gt;Given these advantages, leveraging AI to address mobility challenges is a logical next step. Through AiMTT, we embrace a “learning by doing” approach to develop responsible, AI-driven solutions.&lt;/p&gt;
&lt;h2 id="vision-and-ambition"&gt;Vision and Ambition&lt;/h2&gt;
&lt;p&gt;AiMTT stands for AI Learning Initiative for Multi-modal Traffic and Transportation. It operates under the umbrella of &lt;a href="https://aic4nl.nl/" target="_blank" rel="noopener"&gt;AIC4NL&lt;/a&gt;, an organization dedicated to the responsible development and application of AI in the Netherlands. Ensuring responsible AI development is essential, as concerns about fairness, inclusivity, privacy, and human oversight continue to grow. Will AI-generated outcomes be equitable? Can privacy be safeguarded? How do we ensure that humans remain in control?&lt;/p&gt;
&lt;p&gt;AiMTT aims to address these critical questions by fostering a collaborative learning community that brings together experts from &lt;a href="https://aimtt.nl/about/partners" target="_blank" rel="noopener"&gt;academia, industry, and government&lt;/a&gt;. Project partners will build, test, and refine AI applications, with ethical considerations—such as fairness, privacy, and human autonomy—at the forefront.&lt;/p&gt;
&lt;h2 id="learning-process"&gt;Learning Process&lt;/h2&gt;
&lt;p&gt;AiMTT’s approach is grounded in practical application. AI solutions will be developed through &lt;a href="https://aimtt.nl/about/use-cases" target="_blank" rel="noopener"&gt;seven real-world use cases&lt;/a&gt;, each designed to create tangible tools that can be directly implemented. Equally important is the learning process itself: identifying best practices, analyzing challenges, and refining AI applications based on real-world insights.&lt;/p&gt;
&lt;p&gt;To support this, the project will offer workshops, training programs, and co-creation sessions—ensuring continuous knowledge exchange and improvement.&lt;/p&gt;
&lt;p&gt;Through AiMTT, we are shaping the future of urban mobility by responsibly integrating AI to create smarter, safer, and more efficient transportation systems.&lt;/p&gt;</description></item><item><title>DELTAS</title><link>http://biaslab.github.io/project/deltas/</link><pubDate>Mon, 01 Sep 2025 00:00:00 +0000</pubDate><guid>http://biaslab.github.io/project/deltas/</guid><description>&lt;p&gt;A collaboration between ASML B.V. and the Signal Processing Systems group, focusing on deep generative models for lithography and metrology. The project is headed by Dr. Ruud van Sloun (TU/e) and Dr. Alexandru Onose (ASML), and executed by Esther van Pelt. Dr. Harm Belt (TU/e + ASML) and Dr. Wouter Kouw (TU/e) assist in supervision.&lt;/p&gt;</description></item><item><title>FlexLab</title><link>http://biaslab.github.io/project/flexlab/</link><pubDate>Fri, 01 Aug 2025 00:00:00 +0000</pubDate><guid>http://biaslab.github.io/project/flexlab/</guid><description>&lt;p&gt;&lt;strong&gt;FlexLab is an AI innovation lab that creates sustainable and societal impact by applying advanced AI technologies to enable flexible electricity consumption and ensure that this flexibility can be used to resolve congestion in medium- and low-voltage power grids. FlexLab achieves this by providing a secure environment in which startups and SMEs can test and validate their flexible energy solutions together with value-chain partners.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;In addition, FlexLab develops expertise in core AI technologies required for flexible electricity consumption and evaluates these technologies through short-cycle innovation trajectories. This approach helps address grid congestion effectively, improve grid reliability and stability, and contribute to the energy transition toward a CO2-neutral Netherlands by 2050.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;An initial prototype will be developed to apply Bayesian networks and deep reinforcement learning to behind-the-meter energy control. Bayesian networks increase the reliability of behind-the-meter energy management, and this work is carried out in collaboration with Zympler (formerly known as Simpl.Energy).&lt;/p&gt;</description></item><item><title>CONTACT-AI</title><link>http://biaslab.github.io/project/contact-ai/</link><pubDate>Sun, 01 Jun 2025 00:00:00 +0000</pubDate><guid>http://biaslab.github.io/project/contact-ai/</guid><description>&lt;p&gt;&lt;strong&gt;Challenge&lt;/strong&gt;. The International Labour Organization reports over 300 million work-related accidents and diseases per year, with nearly 3 million being fatal (&lt;a href="https://www.ilo.org/resource/news/nearly-3-million-people-die-work-related-accidents-and-diseases" target="_blank" rel="noopener"&gt;ILO report&lt;/a&gt;). Embodied Artificially Intelligent (EAI) agents can reduce this toll drastically, for example by
inspecting construction sites or transporting cargo through hazardous areas. However, autonomously navigating unknown environments is difficult and requires adaptive decision-making. Suppose the agent detects a visually ambiguous obstacle: is it a crate that can be pushed away? Or a fence that needs to be navigated around? Rule-based algorithms and task-priority controllers could yield unsafe situations, while reinforcement learning (RL) requires enormous amounts of trial-and-error, potentially breaking the robot during training. The challenge is to design an EAI agent that cautiously and efficiently explores using multiple sensory modalities to find the best path through unknown terrain.&lt;/p&gt;
&lt;p&gt;
&lt;figure id="figure-figure-1-upon-detection-of-an-obstacle-external-uncertainty-vision-increases-this-uncertainty-will-be-transferred-to-internal-uncertainty-kinematic-the-agent-then-minimizes-kinematic-uncertainty-by-making-contact-with-the-obstacle"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="http://biaslab.github.io/img/projects/CONTACTAI-uncertainty.png" alt="Figure 1. Upon detection of an obstacle, external uncertainty (vision) increases. This uncertainty will be transferred to internal uncertainty (kinematic). The agent then minimizes kinematic uncertainty by making contact with the obstacle." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Figure 1. Upon detection of an obstacle, external uncertainty (vision) increases. This uncertainty will be transferred to internal uncertainty (kinematic). The agent then minimizes kinematic uncertainty by making contact with the obstacle.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Solution framework&lt;/strong&gt;: brain-inspired multi-modal switching dynamics. We believe an agent should use touch for exploration when vision cannot resolve ambiguity in its environment (Figure 1). Mechanistically, we envision an agent that transfers visual uncertainty about the external world (what is this obstacle in front of me?) to kinematic uncertainty internally (what will happen if I move my leg?), and then reacts with actions that minimize uncertainty (e.g., gently pushing the object with a leg). To create such an agent, we take inspiration from natural embodied intelligence and computational neuroscience, specifically Active Inference. An active inference agent operates on beliefs (probability distributions over unknown variables) and updates these using variational Bayesian inference when new data is observed. Using quantified uncertainty, actions are balanced between exploration (maximizing information gain during data acquisition) and exploitation (reaching a goal). Active inference has been demonstrated to be a powerful framework for planning and navigation. Uncertainty also leads to caution: slow, careful movements when uncertainty is high and rapid, targeted movements when uncertainty is low.&lt;/p&gt;
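&lt;p&gt;To make this trade-off concrete, the sketch below is a minimal, purely illustrative example (not the project’s implementation): a Gaussian belief over an ambiguous obstacle property is updated by conjugate Bayesian inference, and candidate actions are scored by balancing expected information gain (exploration) against progress toward a goal (exploitation). All names and numbers are hypothetical.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;
import numpy as np

# Belief over an obstacle property (e.g. "pushability"): Gaussian (mu, var).
mu, var = 0.0, 4.0  # large variance = high uncertainty after visual detection

def update_belief(mu, var, obs, obs_var):
    """Conjugate Gaussian update after a noisy contact observation."""
    new_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    new_mu = new_var * (mu / var + obs / obs_var)
    return new_mu, new_var

def score(var, obs_var, goal_progress):
    """Negative of (expected information gain + goal progress); lower is better."""
    posterior_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    info_gain = 0.5 * np.log(var / posterior_var)  # expected entropy reduction
    return -(info_gain + goal_progress)

# A cautious touch is informative but makes little progress toward the goal;
# walking on makes progress but reveals nothing about the obstacle.
actions = {"touch": (0.5, 0.1), "walk_on": (1e6, 1.0)}
best = min(actions, key=lambda a: score(var, *actions[a]))
print(best)  # "touch" while uncertainty is high; "walk_on" once it shrinks
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After a few calls to update_belief the variance shrinks, and the same scoring rule starts selecting the goal-directed action, mirroring the cautious-then-confident behavior described above.&lt;/p&gt;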
&lt;p&gt;We propose to design an active inference agent for a quadrupedal robot that incorporates visual perception, planning, decision-making and sensorimotor control (Figure 2). The active inference module learns two sets of dynamics: locomotion and loco-manipulation. Visual perception is passed as a belief, expressed in terms of a factorized probability distribution, to the active inference module. Visual uncertainty is merged with the uncertainty in the loco-manipulation dynamics, akin to sensor fusion. When that uncertainty becomes large, the agent favours actions that minimize it in the future, such as manipulating the unknown object with its leg. Since the initial uncertainty will be high, the agent will make contact cautiously. Uncertainty shrinks with contact, and a stronger action, such as pushing the object away, can then be chosen, potentially opening up an improved locomotion path. In summary, the proposed active inference agent will use multiple modalities (vision, touch) to cautiously resolve ambiguity in the world and navigate the environment more robustly.&lt;/p&gt;
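&lt;p&gt;As a toy illustration of the switching idea (detailed further under Implementation below), the following sketch lets two candidate dynamics models compete to explain leg motion; the posterior over regimes is updated from prediction errors, and its spread plays the role of kinematic uncertainty. The regime names and parameters are invented for illustration and should not be read as the project’s actual models.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;
import numpy as np

# Two hypothetical regimes for leg dynamics: free motion vs. contact.
# Each is an AR(1) model x_new = a * x_prev + noise with known parameters.
models = {"free": (0.9, 0.1), "contact": (0.3, 0.1)}  # (a, noise std)
log_post = {m: np.log(0.5) for m in models}           # uniform prior

def update(log_post, x_prev, x_now):
    """Update the regime posterior from a one-step prediction error."""
    for m, (a, s) in models.items():
        err = x_now - a * x_prev
        log_post[m] += -0.5 * (err / s) ** 2 - np.log(s)  # Gaussian log-lik.
    z = np.logaddexp(*log_post.values())                  # normalizer
    return {m: lp - z for m, lp in log_post.items()}

# Simulate a leg trajectory that is blocked by an obstacle ("contact" regime).
rng = np.random.default_rng(0)
x = 1.0
for _ in range(20):
    x_new = 0.3 * x + 0.1 * rng.standard_normal()
    log_post = update(log_post, x, x_new)
    x = x_new

print({m: float(np.exp(lp)) for m, lp in log_post.items()})
# Posterior mass concentrates on the "contact" regime.
&lt;/code&gt;&lt;/pre&gt;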
&lt;p&gt;
&lt;figure id="figure-figure-2-active-inference-agent-overview-vision-based-uncertainty-about-the-world-affects-kinematic-uncertainty-and-triggers-a-switch-to-loco-manipulation-exploring-the-world-through-cautious-touch-planning-implemented-as-message-passing-on-a-forney-style-factor-graph-edges-are-random-variables-nodes-are-operations-of-switching-autoregressive-models"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="http://biaslab.github.io/img/projects/CONTACTAI-system.png" alt="Figure 2. Active inference agent overview. Vision-based uncertainty about the world affects kinematic uncertainty and triggers a switch to loco-manipulation, exploring the world through cautious touch. Planning implemented as message passing on a Forney-style factor graph (edges are random variables, nodes are operations) of switching autoregressive models." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Figure 2. Active inference agent overview. Vision-based uncertainty about the world affects kinematic uncertainty and triggers a switch to loco-manipulation, exploring the world through cautious touch. Planning implemented as message passing on a Forney-style factor graph (edges are random variables, nodes are operations) of switching autoregressive models.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Implementation&lt;/strong&gt;. The agent will have a vision module, a control module, and a decision-making module (Figure 2). The vision module runs a simultaneous localization and mapping algorithm as well as rudimentary object detection. The planning and navigation module will switch between locomotion and loco-manipulation. During locomotion, it generates targets for a gait controller and guides the robot along the planned path. During loco-manipulation, it plans a series of cautious contact-rich policies that maximize information gain about the object and whether it can be pushed away. We will use switching autoregressive models, which are explainable in terms of the effect of input sources on output predictions and for which information gain can be calculated analytically. Computations are distributed by means of reactive message passing on a Forney-style factor graph. This keeps the computational cost small enough to run in situ (e.g., Raspberry Pi + NVIDIA Jetson) on a low-cost quadrupedal robot platform (e.g., Petoi Bittle), which we aim to demonstrate as a proof-of-concept.&lt;/p&gt;</description></item><item><title>EmbodEAI</title><link>http://biaslab.github.io/project/embodeai/</link><pubDate>Sun, 01 Jun 2025 00:00:00 +0000</pubDate><guid>http://biaslab.github.io/project/embodeai/</guid><description>&lt;p&gt;&lt;strong&gt;State-of-the-art (deep) reinforcement learning systems, for all their fantastic achievements, struggle in real-world tasks that are trivial for humans, especially those involving physical interactions. At the same time, these systems consume excessive power for training and operation. That is because they are inefficient with their model representations (many parameters) and their data (big data and many trials for training). We see the (robot) body as an enormous computational resource that is poorly understood and largely underappreciated. But how to harness embodiment remains an important open question.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Signal processing by a physical body is extremely cheap and robust, but specialized; the brain is flexible, but more power-hungry. This results in a design trade-off that leads us to the following two multidisciplinary research questions for this project.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Which continuous-learning processing tasks should be delegated primarily to hardware (body) and which primarily to software (brain), and&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;how should the brain and the body be designed to capitalize on the potential of embodied intelligence?&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We study how embodiment can be harnessed as a resource in a next generation of embodied AI (EAI) systems.&lt;/p&gt;
&lt;p&gt;Supervisory team:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Youri van de Burgt, Associate Professor, Microsystems, ME, Promotor&lt;/li&gt;
&lt;li&gt;Irene Kuling, Assistant Professor, Robotics, ME, co-promotor&lt;/li&gt;
&lt;li&gt;Thijs van de Laar, Assistant Professor, BIASlab, EE, co-promotor&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>FEP-Lab</title><link>http://biaslab.github.io/project/fep-lab/</link><pubDate>Sun, 01 Jun 2025 00:00:00 +0000</pubDate><guid>http://biaslab.github.io/project/fep-lab/</guid><description>&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="http://biaslab.github.io/img/projects/FEPlab.png" alt="https://icai.ai/lab/fep-lab/" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The FEPlab (Free Energy Principle Laboratory) is a collaboration between Eindhoven University of Technology (TU/e) and GN Hearing. The mission of the lab is to improve the participation of hearing-impaired people in formal and informal social settings. The lab will focus its research on transferring a leading physics/neuroscience-based theory about computation in the brain, the Free Energy Principle (FEP), to practical use in human-centered agents such as hearing devices and VR technology.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;GN Hearing, a globally leading hearing aid manufacturer with a strong research team of about 20 people in Eindhoven, and TU/e have been collaborating for many years in BIASlab, a research team in the Electrical Engineering department at TU/e. This collaboration has produced theoretical foundations for synthetic FEP-based AI agents. FEPlab was set up in 2022 and is expected to run until mid-2027. During this time, the partners will continue to develop these FEP agents into a technology that is ready for deployment in the professional hearing device industry.&lt;/p&gt;
&lt;p&gt;FEPlab focuses on two Sustainable Development Goals: Goal 3, Good Health and Well-being, and Goal 8, Decent Work and Economic Growth (promote sustained, inclusive, and sustainable economic growth, full and productive employment, and decent work for all). Untreated hearing loss in the elderly increases the risk of developing dementia and Alzheimer’s disease (Ralli et al., 2019) as well as emotional and physical problems (Ciorba et al., 2012). Therefore, this research ties neatly into SDG target 3.4: reducing premature mortality from non-communicable diseases. Moreover, hearing loss negatively impacts work participation (Svinndal et al., 2018). Hence, this research also ties into SDG target 8.2: achieving higher levels of economic productivity through technological upgrading and innovation.&lt;/p&gt;
&lt;p&gt;The lab brings together experts from fields such as Audiology, Autonomous Agents &amp;amp; Robotics, Decision Making, and Machine Learning to tackle the complex multidisciplinary challenges at hand. Socially aware AI and explainable AI are especially important in the lab’s research: to be used safely, the technology needs to be aware of the social context in which it operates and must be able to justify its decisions and actions in a manner that humans can understand.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ralli, Massimo, et al. “Hearing loss and Alzheimer’s disease: a review.” The International Tinnitus Journal 23.2 (2019): 79-85.&lt;/li&gt;
&lt;li&gt;Ciorba, Andrea, et al. “The impact of hearing loss on the quality of life of elderly adults.” Clinical Interventions in Aging 7 (2012): 159.&lt;/li&gt;
&lt;li&gt;Svinndal, Elisabeth Vigrestad, et al. “Hearing loss and work participation: a cross-sectional study in Norway.” International Journal of Audiology 57.9 (2018): 646-656.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The lab is part of the &lt;a href="https://icai.ai/lab/fep-lab/" target="_blank" rel="noopener"&gt;Innovation Center for Artificial Intelligence&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>FEPQuad</title><link>http://biaslab.github.io/project/fepquad/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>http://biaslab.github.io/project/fepquad/</guid><description>&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="http://biaslab.github.io/img/projects/FEPquad.png" alt="FEPquad" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;We will design an artificially intelligent autonomous system for quadrupedal robot locomotion, using a novel paradigm from theoretical neuroscience called Active Inference (AIF). “Active” refers to selecting actions that reduce uncertainty within the probabilistic model of the world, and “inference” refers to the use of variational Bayesian inference to update beliefs over unobserved variables in the model (e.g. parameters, states, noise, controls). AIF is primarily a perspective on neural information processing, intended to model cognition, for instance how rats explore small mazes in search of food. But it can also serve as a design principle for artificially intelligent autonomous systems (agents). These can be applied to signal processing systems (e.g. adaptively calibrating hearing aids), control systems (e.g. identifying electro-mechanical systems), or robotics (e.g. learning to grasp). However, bringing AIF to engineering is far from trivial. There are many technical challenges, such as how to account for strong non-linearities, how to deal with high degrees of freedom of moving parts, or how to include practical constraints to avoid breaking hardware.&lt;/p&gt;
&lt;p&gt;Why Active Inference? It offers several technical advantages over the current state-of-the-art in AI methodologies. Deep learning is a popular framework with impressive applications, but is not without its limitations. Firstly, it requires huge amounts of data to “discover” structure. AIF is a hybrid of model-driven and data-driven learning, which means it can rely on prior knowledge when data is scarce. Secondly, deep learning methods typically only perform well after an experienced designer has extensively tuned the network’s architecture, regularized its complexity, and tried various optimizers. In AIF, there are fewer model parameters, regularization arises naturally through prior distributions, and optimization is less of an issue. To train deep neural networks, you need expensive hardware (GPUs) with a sizeable carbon footprint. AIF agents require less computation power and are more suited to embedded electronics. Last but not least, when deep learning is applied to control (deep reinforcement learning), the engineer must design a “reward function” that indicates the value of actions. This function is hard to design, leading to misbehaving agents. AIF agents do not suffer from this problem because rewards arise implicitly from the probabilistic model. These properties are all attractive, but the most important argument for AIF is that it represents a principled way to design intelligent systems: instead of hacking something together based on task-specific cost functions, we now have a first-principles framework for perception, decision-making, planning and action.&lt;/p&gt;
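&lt;p&gt;The claim that regularization arises naturally from priors can be made concrete with a standard textbook identity: for a linear-Gaussian model, the MAP estimate of the weights under a zero-mean Gaussian prior coincides with the ridge regression solution, with the penalty fixed by the noise-to-prior variance ratio rather than hand-tuned. A minimal sketch with invented toy data:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;
import numpy as np

# Linear model y = X w + noise (std sigma); prior on weights w ~ N(0, tau^2 I).
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.3 * rng.standard_normal(50)

sigma, tau = 0.3, 1.0
lam = (sigma / tau) ** 2  # ridge penalty implied by the prior, not hand-tuned

# MAP estimate under the Gaussian prior equals the ridge regression solution.
w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(w_map)  # close to w_true
&lt;/code&gt;&lt;/pre&gt;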
&lt;p&gt;Why quadrupedal walking robots? Because unlike wheeled robots, walkers can step over objects and climb stairs. Unlike drones, they can enter confined spaces and operate for extended periods of time. In theory, quadrupeds are highly agile. In practice, learning to walk is such a complex challenge that they often fail to live up to their potential. Modern AI has accelerated legged robotics to the point that quadrupeds now walk relatively smoothly. Companies such as Boston Dynamics, ANYbotics and Unitree are developing commercial products for semi-autonomous site inspection and maintenance. But their controllers still rely heavily on deep learning. Our AIF agent will make it much easier to teach legged robots to walk.&lt;/p&gt;
&lt;p&gt;How will the proposed agent work? AIF agents are based on a probabilistic graphical model expressing the dependence of observed and unknown variables through conditional distributions. For dynamical systems, there are leaf nodes representing initial conditions that are specified as “prior beliefs”. One can estimate the posterior distributions for the unknowns (e.g. states, parameters, noises) through Bayesian inference, usually in the form of a message-passing algorithm. Sometimes, the integrals involved are intractable. Variational inference solves this by approximating the posteriors with a simpler “recognition model”. AIF agents are essentially a form of variational Bayesian inference on probabilistic graphical models of dynamic systems that alternate between solving a signal processing (perception) and a control (action) problem to reach a goal.&lt;/p&gt;
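&lt;p&gt;A minimal sketch of that variational step, under invented toy assumptions (a one-dimensional latent state with a sinusoidal observation model, so the exact posterior has no simple closed form): the posterior is replaced by a Gaussian recognition model whose parameters are chosen to minimize variational free energy, here by a crude grid search to keep the example dependency-free.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;
import numpy as np

# Toy model: prior x ~ N(0, 1); observation y = sin(x) + noise (std 0.1).
rng = np.random.default_rng(2)
y, s = 0.6, 0.1

def free_energy(m, log_v, n=256):
    """Monte Carlo estimate of free energy for the recognition model N(m, v)."""
    v = np.exp(log_v)
    x = m + np.sqrt(v) * rng.standard_normal(n)         # samples from q
    nll = ((y - np.sin(x)) ** 2).mean() / (2 * s ** 2)  # expected neg. log-lik.
    kl = 0.5 * (v + m ** 2 - 1.0 - log_v)               # KL(q, prior), closed form
    return nll + kl

# Minimize over a coarse grid of recognition-model parameters.
grid = ((free_energy(m, lv), m, lv)
        for m in np.linspace(-2, 2, 81) for lv in np.linspace(-8, 1, 81))
f, m, lv = min(grid)
print("q mean:", round(m, 2), " q var:", round(float(np.exp(lv)), 4))
&lt;/code&gt;&lt;/pre&gt;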
&lt;p&gt;The position is supported by the Sectorplan Techniek of the Dutch Ministry of Education, Culture and Science, and by the Eindhoven Artificial Intelligence Systems Institute.&lt;/p&gt;
&lt;iframe width="560" height="315" src="https://www.youtube.com/embed/fSQYd6dfmWM?si=hbkHwyYANWeexy-7" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen&gt;&lt;/iframe&gt;</description></item><item><title>Auto-AR</title><link>http://biaslab.github.io/project/auto-ar/</link><pubDate>Fri, 01 Oct 2021 00:00:00 +0000</pubDate><guid>http://biaslab.github.io/project/auto-ar/</guid><description>&lt;p&gt;Automated Situated Design of Augmented Hearing Reality Algorithms&lt;/p&gt;</description></item><item><title>BayesBrain</title><link>http://biaslab.github.io/project/bayesbrain/</link><pubDate>Thu, 01 Apr 2021 00:00:00 +0000</pubDate><guid>http://biaslab.github.io/project/bayesbrain/</guid><description>&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="http://biaslab.github.io/img/projects/BayesBrain.jpg" alt="BayesBrain-scheme" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Computation in biological brain tissue consumes several orders of magnitude less power than silicon-based systems. Motivated by this fact, this project aims to develop the world’s first hybrid neuro-in-silico Artificial Intelligence (AI) computer, introducing a fundamentally new paradigm of AI computing. In this high-risk high-gain project, we will combine an in-silico Bayesian control agent (BCA) with neural tissue hosted by a microfluidic Brain-on-Chip (BoC) that together form a hybrid learning system capable of solving real-world AI problems.&lt;/p&gt;
&lt;p&gt;All computation and communication inside and between the BCA and BoC will be governed by the Free Energy Principle, which is a leading neuroscientific theory of biological neuronal processes and also supports a variational Bayesian machine learning interpretation. We will start by developing a purely silicon-based BCA that learns to balance an inverted pendulum, implemented by free energy minimization on a factor graph. Next, we will replace successively larger parts of the factor graph with biological neural circuits of a microfluidic multi-compartment BoC device. The biological network will be trained by electrical stimulation orchestrated by the synthetic Bayesian agent. For the communication between these two units, we will design and realize a novel communication protocol that makes use of existing software for readout and event sorting of calcium imaging and multi-electrode array data, such as MEAViewer, CALIMA, NetCal and SpikeHunter. By scaling up the number of replaced sub-circuits, we aim to provide a proof-of-concept and to lay the basis for ultra-low power hybrid brain-on-chip AI computing.&lt;/p&gt;
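&lt;p&gt;To illustrate what such a communication protocol might look like, the sketch below is purely hypothetical: a factor-graph message (a Gaussian mean and precision) is encoded as stimulation parameters, and recorded spike counts are decoded back into a return message. All encoding rules, units, and ranges are invented placeholders, not the protocol under development.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;
import numpy as np

def encode_message(mean, precision):
    """Map a Gaussian message onto pulse rate (Hz) and amplitude (uA)."""
    rate = float(np.clip(10.0 + 40.0 * mean, 1.0, 100.0))  # rate codes the mean
    amp = float(np.clip(5.0 * precision, 1.0, 50.0))       # amplitude codes precision
    return {"rate_hz": rate, "amp_ua": amp}

def decode_spikes(spike_counts, window_s=1.0):
    """Turn per-electrode spike counts into a Gaussian return message."""
    rates = np.asarray(spike_counts) / window_s
    mean = (rates.mean() - 10.0) / 40.0                 # invert the rate code
    precision = 1.0 / max(float(rates.var()), 1e-3)     # agreement across electrodes
    return mean, precision

stim = encode_message(mean=0.4, precision=3.0)
reply = decode_spikes([25, 27, 26, 24])
print(stim, reply)
&lt;/code&gt;&lt;/pre&gt;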
&lt;p&gt;This position is supported by the Exploratory Multidisciplinary AI Research Program of the Eindhoven Artificial Intelligence Systems Institute.&lt;/p&gt;</description></item></channel></rss>