
Mykola Lukashchuk

PhD candidate, TU Eindhoven

Interests

  • Probabilistic Inference
  • Approximate Inference
  • Active Inference

Education

  • MSc in Statistics, 2021

    Taras Shevchenko National University of Kyiv

  • MSc in Computer Science, 2021

    Instituto Politécnico Nacional

  • BSc in Statistics, 2019

    Taras Shevchenko National University of Kyiv

Biography

A team of researchers at a hearing aid company has developed a new algorithm for improving sound quality in noisy environments. Before the algorithm can be used in hearing aids, however, it must be optimized to run faster and consume less power. To address this, a separate team of engineers adapts the algorithm to the production environment, implementing strategies for efficient execution and power management. Following a typical iterative development procedure, they then pass the updated algorithm back to the research team, adding another iteration to the cycle. Finally, after much time spent in this cycle and many iterations, both teams deliver the product.

Unfortunately, some customers testing the product, dog handlers, are complaining: they cannot hear their dogs! Why? The algorithm treated barking, an essential sound for dog handlers, as noise!

To build a better algorithm that can deal with this unexpected scenario, both teams need to spend even more time in the development cycle. Evidently, convergence to a good algorithm is very slow in this development cycle: the need for two separate teams slows down the research process, and another iteration in the cycle is required whenever something unexpected happens.

My general long-term research goal is to resolve this two-separate-teams problem (at least partially). Specifically, I aim to develop a flexible computational engine that can trade precision for efficiency, generating new versions of an algorithm that are less precise but still usable for testing. In the example above, this would help the teams make progress more quickly and avoid restarting the entire development cycle when unexpected issues arise, such as barking being classified as noise by the initial algorithm.

To implement this engine, I want to treat message computation itself as a Bayesian procedure inside message-passing inference. In message-passing inference, all computations are local and can be cached and reused for faster inference. If less precision suffices, approximations of the exact result can be computed via approximate inference methods. All inference methods within this engine will be implemented and demonstrated within the RxInfer ecosystem.
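To make the precision-for-efficiency trade concrete, here is a minimal toy sketch of my own (not code from RxInfer, and not the actual method described above): in sum-product message passing, a message is an exact marginalization over a neighboring variable, and a cheaper engine can replace that sum with a Monte Carlo estimate. All names and the factor below are illustrative assumptions.

```python
import random

# In sum-product message passing, the message from a factor f to a
# variable x marginalizes over the neighboring variable y:
#     m(x) = sum_y f(x, y) * mu(y)
# For large state spaces this sum is expensive, so a less precise but
# cheaper engine can estimate it by sampling y from the incoming
# message mu instead of summing over every state.

states = [0, 1, 2]
mu = [0.5, 0.3, 0.2]                 # incoming message over y (normalized)
f = {(x, y): 1.0 / (1 + abs(x - y))  # an arbitrary pairwise factor f(x, y)
     for x in states for y in states}

def exact_message(x):
    """Exact marginalization: sum over all states of y."""
    return sum(f[(x, y)] * mu[y] for y in states)

def mc_message(x, n=5000, rng=random.Random(0)):
    """Monte Carlo approximation: average f(x, y) over samples y ~ mu."""
    samples = rng.choices(states, weights=mu, k=n)
    return sum(f[(x, y)] for y in samples) / n

for x in states:
    print(f"m({x}): exact={exact_message(x):.3f}, approx={mc_message(x):.3f}")
```

The approximate message converges to the exact one as the sample count grows, so the engine can dial the number of samples up or down depending on how much precision the current iteration of the development cycle actually needs.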

Previously, I worked as a DevOps & machine learning engineer and natural language processing engineer.

Publications

ExponentialFamilyManifolds.jl: Representing exponential families as Riemannian manifolds

Mykola Lukashchuk, Dmitry V. Bagaev, Albert Podusenko, Ismail Senoz, Bert de Vries
JuliaCon 2024
April, 2025
Details PDF Code JuliaCon 2024 Proceedings

Riemannian Black Box Variational Inference

Mykola Lukashchuk, Wouter Nuijten, Dmitry V. Bagaev, Ismail Senoz, Bert de Vries
NeurIPS 2024 BDU
October, 2024
Details PDF Code OpenReview

Q-conjugate Message Passing for Efficient Bayesian Inference

Mykola Lukashchuk, Ismail Senoz, Bert de Vries
PGM 2024
September, 2024
Details PDF PMLR

Efficient Bayesian Inference by Conjugate-computation Variational Message Passing

Mykola Lukashchuk, Ismail Senoz, Bert de Vries
MLSP 2023
September, 2023
Details PDF Code
