Bayesian inference can be extended to probability distributions defined in terms of their inverse distribution function, i.e. their quantile function. This applies to both the prior and the likelihood. A *quantile-based likelihood* is useful in models whose sampling distributions lack an explicit probability density function. A *quantile-based prior* allows flexible distributions for expressing expert knowledge. The principle of *quantile-based* Bayesian inference is demonstrated in the univariate setting with a Govindarajulu likelihood, as well as in a *parametric quantile regression*, where the error term is described by the quantile function of a Flattened Skew-Logistic distribution.