To accommodate automatic (Bayesian) inference in a large class of models, probabilistic programming systems usually rely on inference methods that require no manual derivations. Most of these methods fall into two families: (i) MCMC methods, which generate (asymptotically exact) samples from the posterior distribution, and (ii) variational methods, which optimize the parameters of a tractable approximation to the true posterior. To avoid model-specific derivations, variational methods in this setting are often based on stochastic optimization, yielding algorithms such as black-box variational inference (BBVI) and automatic differentiation variational inference (ADVI). While these ‘derivation-free’ algorithms can perform inference in a wide range of models, their generality comes at the price of a significant computational cost. In some cases, this price is inherent to the model at hand: we might simply not be able to construct an algorithm that performs inference substantially more cheaply. However, there is also a large family of practical models (such as the random walk model we discuss in this paper) for which more efficient inference algorithms can be devised by exploiting model-specific structure, as the sketches below illustrate. For example, an efficient algorithm might leverage closed-form solutions to integrals or local approximations to speed up inference. Ideally, such an efficient algorithm would also be generated automatically by a probabilistic programming framework, yielding a faster algorithm without additional manual work.
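To make the ‘derivation-free’ character of these methods concrete, the display below sketches the standard score-function (REINFORCE) gradient estimator underlying BBVI; the notation ($\lambda$ for variational parameters, $z$ for latent variables, $x$ for data, $S$ for the number of Monte Carlo samples) is generic and not tied to any particular model in this paper:
$$
\nabla_\lambda \mathcal{L}(\lambda)
= \mathbb{E}_{q_\lambda(z)}\!\left[\nabla_\lambda \log q_\lambda(z)\,\bigl(\log p(x,z) - \log q_\lambda(z)\bigr)\right]
\approx \frac{1}{S}\sum_{s=1}^{S} \nabla_\lambda \log q_\lambda\bigl(z^{(s)}\bigr)\,\bigl(\log p\bigl(x,z^{(s)}\bigr) - \log q_\lambda\bigl(z^{(s)}\bigr)\bigr),
\qquad z^{(s)} \sim q_\lambda(z),
$$
where $\mathcal{L}(\lambda) = \mathbb{E}_{q_\lambda(z)}[\log p(x,z) - \log q_\lambda(z)]$ is the evidence lower bound (ELBO). The estimator needs only pointwise evaluations of $\log p(x,z)$ and samples (and gradients) of $q_\lambda$, which is what makes it model-agnostic; its variance, however, typically demands many optimization iterations, each requiring multiple evaluations of the joint density.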
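As an illustration of the kind of structure a specialized algorithm can exploit, the following sketch computes the exact filtering posterior of a scalar Gaussian random walk (local-level) model in closed form via Kalman-style conjugate updates. This is a minimal, hypothetical example assuming known noise variances `q` and `r`; the random walk model treated later in the paper need not match it.

```python
import numpy as np

def gaussian_random_walk_posterior(x, q, r, m0=0.0, p0=1e6):
    """Exact filtering posterior p(z_t | x_{1:t}) for the model
        z_t = z_{t-1} + eps_t,   eps_t   ~ N(0, q)
        x_t = z_t + delta_t,     delta_t ~ N(0, r)
    computed in closed form using Gaussian conjugacy.
    Returns posterior means and variances for each time step."""
    m, p = m0, p0
    means, variances = [], []
    for xt in x:
        # Predict: propagate the posterior through the random walk.
        p_pred = p + q
        # Update: condition on the observation (conjugate Gaussian update).
        k = p_pred / (p_pred + r)   # Kalman gain
        m = m + k * (xt - m)
        p = (1.0 - k) * p_pred
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)
```

For this model, a single O(T) pass over the data replaces the iterative stochastic optimization of BBVI/ADVI entirely; this is precisely the kind of saving that an automatically generated, structure-aware inference algorithm could deliver.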