LTL learning on GPUs

**Abstract.**
Linear temporal logic (LTL) is widely used in industrial
verification. LTL formulae can be learned from
traces. Scaling LTL formula learning is an open problem. We
implement the first GPU-based LTL learner using a novel form of
enumerative program synthesis. The learner is sound and
complete. Our benchmarks indicate that it handles traces at least
2048 times more numerous than, and runs on average at least 46 times
faster than, existing state-of-the-art learners. This is achieved
with, among other techniques, a novel branch-free LTL semantics with
*O(log n)* time complexity, where *n* is the trace length, while
previous implementations are *O(n^2)* or worse (assuming bitwise
Boolean operations and shifts by powers of 2 have unit cost, a
realistic assumption on modern processors).
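To illustrate the branch-free, bit-parallel idea behind the *O(log n)* bound, here is a minimal sketch (an assumption-laden illustration, not the paper's implementation): if a proposition's truth values along a trace of length *n* are packed into a bitvector, the temporal operator **F** ("eventually") becomes a suffix-OR, computable with *O(log n)* shifts and ORs and no branches on the data.

```python
def eventually(p: int, n: int) -> int:
    """Bit-parallel semantics of F ("eventually") over a trace of length n.

    Bit i of p is 1 iff the proposition holds at time point i.
    F p holds at time i iff p holds at some j >= i, i.e. the
    suffix-OR of p. Doubling the shift each round gives O(log n)
    word operations instead of an O(n) scan per time point.
    (Python ints stand in for fixed-width machine words here.)
    """
    k = 1
    while k < n:
        p |= p >> k  # pull truth values from up to k steps ahead
        k <<= 1      # double the reach each round: 1, 2, 4, ...
    return p & ((1 << n) - 1)  # mask to the trace length
```

For example, if the proposition holds only at time point 4 of an 8-point trace (`p = 0b00010000`), then `eventually(p, 8)` sets bits 0 through 4, reflecting that **F** p holds at every time point up to and including 4. The loop body contains no data-dependent branches, which is what makes this style of semantics friendly to GPU execution.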

**Downloads:** Short version (draft). Implementation (GitHub). BibTeX