Whereas graphical models describe how different quantities are correlated based on observations of the world, causal models additionally stipulate the causal relationships between those quantities. By incorporating this specification of causality, these models allow data scientists to reason about what would occur in hypothetical situations, outside the realm of anything previously observed.
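The observational/interventional distinction can be made concrete with a small simulation. The structural causal model below is purely illustrative (a confounder Z drives both X and Y, and X also drives Y): conditioning on an observed value of X implicitly selects samples where Z is also large, while intervening to set X breaks the Z-to-X link, so the two computations give different answers for Y.

```python
import random
from statistics import mean

random.seed(0)

def sample(intervene_x=None):
    # Hypothetical structural causal model: Z -> X, Z -> Y, X -> Y.
    z = random.gauss(0, 1)
    # Intervention replaces X's structural equation with a constant.
    x = z + random.gauss(0, 0.1) if intervene_x is None else intervene_x
    y = 2 * x + 3 * z + random.gauss(0, 0.1)
    return x, y

# Observational query: E[Y | X ~= 1]. Conditioning on a large X also
# selects samples with a large confounder Z, inflating Y.
obs = [y for x, y in (sample() for _ in range(200000)) if abs(x - 1) < 0.05]

# Interventional query: E[Y | do(X = 1)]. Setting X leaves Z at its
# unconditional mean of zero, so only the direct 2*X effect remains.
intv = [y for _, y in (sample(intervene_x=1.0) for _ in range(200000))]

mean_obs, mean_intv = mean(obs), mean(intv)
print(round(mean_obs, 1))   # close to 5: direct effect plus confounding
print(round(mean_intv, 1))  # close to 2: direct effect only
```

The gap between the two estimates is exactly the confounding that a purely correlational (graphical) model cannot separate out; the causal model's extra structure is what licenses the `do`-style computation.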
One issue with classical Bayesian models is that they simplify the world into a system with a small number of parameters so that the inference procedure remains tractable. Recent advances in neural networks have provided the tools to overcome this limitation, allowing for Bayesian models with a large number of parameters.
Nonparametric Bayesian models provide another alternative: they have a potentially infinite number of parameters, yet remain relatively tractable because any finite dataset only ever instantiates a finite subset of them.
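A standard example of this idea is the Chinese restaurant process, the clustering prior underlying Dirichlet process mixtures. The sketch below (with illustrative parameter values) shows the key property: the model allows unboundedly many clusters in principle, but the number actually instantiated by n data points grows only on the order of alpha * log(n).

```python
import random

random.seed(0)

def crp(n, alpha=1.0):
    """Chinese restaurant process: sample cluster assignments for n points.

    Customer i joins an existing table with probability proportional to
    its size, or starts a new table with probability alpha / (i + alpha).
    Infinitely many tables are possible; only a few get used.
    """
    counts = []       # number of customers at each instantiated table
    assignments = []  # table index chosen by each customer
    for i in range(n):
        r = random.uniform(0, i + alpha)
        if r < alpha:
            counts.append(1)                  # open a new table
            assignments.append(len(counts) - 1)
        else:
            r -= alpha                        # pick an existing table,
            for t, c in enumerate(counts):    # proportional to its size
                if r < c:
                    counts[t] += 1
                    assignments.append(t)
                    break
                r -= c
    return assignments, counts

assignments, counts = crp(1000, alpha=2.0)
print(len(counts))  # far fewer than 1000: roughly alpha * log(n) tables
```

Inference only ever has to track the instantiated tables, which is why the "infinite" model stays tractable in practice.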
It can be difficult to precisely formulate Bayesian inference when the problem itself is ill-defined. In some cases one wants to flag “anomalous” occurrences or to detect when a “change” of some kind has occurred. In such cases there are general techniques that can be applied, but often the proper definition of an anomaly or a change must be tailored to a specific domain.
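As an illustration of how much of change detection is definitional, here is a minimal one-sided CUSUM sketch for detecting an upward shift in a stream's mean. The baseline window length, the drift allowance, and the alarm threshold are all assumptions chosen for this toy example; picking them well is precisely the domain-specific tailoring described above.

```python
import statistics

def detect_change(xs, threshold=5.0):
    """Flag the first index where the cumulative positive deviation from a
    baseline mean exceeds `threshold` baseline standard deviations.

    The baseline window (20 points), the 0.5-sigma drift allowance, and the
    threshold are illustrative choices, not universal constants.
    """
    baseline = xs[:20]                      # assume the first 20 points are "normal"
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline) or 1.0
    s = 0.0
    for i, x in enumerate(xs):
        # Accumulate standardized excursions above mu, forgetting small drift.
        s = max(0.0, s + (x - mu) / sigma - 0.5)
        if s > threshold:
            return i                        # first index flagged as a change
    return None                             # no change detected

# Toy stream: 20 points around 0, then a shift to points around 2.
data = [0.1, -0.2, 0.0, 0.3, -0.1] * 4 + [2.0, 2.2, 1.9, 2.1, 2.3] * 4
print(detect_change(data))  # flags index 20, where the shift begins
```

The same skeleton yields very different detectors depending on those choices, which is why no single general-purpose definition of "change" suffices across domains.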