The advent of deep learning has opened new avenues for progress in automated theorem proving, as evidenced by systems such as DeepHOL, a neural theorem prover, and HOList, the general software framework for automated proof search it is paired with.
My master's thesis aims to harness and adapt these two systems to investigate whether certain mathematical calculi are more amenable to deep learning than other, logically equivalent ones. The modal non-associative Lambek calculus is useful in this regard: its focused and unfocused forms are logically equivalent, yet they differ significantly in how they encode derivations in their sequents. By comparing the performance of a model trained on each form over a custom-made sequent dataset, we can glean useful information about how this calculus interacts with deep learning.
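To make the comparison concrete, sequents must be serialized into token sequences a neural model can consume. The sketch below is a minimal, assumed toy encoding (prefix notation, illustrative connective and atom names); it is not the thesis's actual dataset format, which may differ.

```python
# Toy prefix serialization of Lambek-style formulas and sequents.
# Formulas are nested tuples: atoms like ('np',), and binary
# connectives like ('\\', A, B) for A\B or ('/', A, B) for A/B.

def tokens(formula):
    """Flatten a formula tree into a prefix token list,
    e.g. ('\\', ('np',), ('s',)) -> ['\\', 'np', 's']."""
    if len(formula) == 1:          # atom
        return [formula[0]]
    op, left, right = formula
    return [op, *tokens(left), *tokens(right)]

def serialize_sequent(antecedents, goal):
    """Join antecedent formulas (comma-separated) and the goal
    formula into one token sequence around a turnstile marker."""
    toks = []
    for f in antecedents:
        toks += tokens(f) + [',']
    return toks[:-1] + ['|-'] + tokens(goal)

# Example sequent: np, np\s |- s
seq = serialize_sequent([('np',), ('\\', ('np',), ('s',))], ('s',))
# seq == ['np', ',', '\\', 'np', 's', '|-', 's']
```

Under such an encoding, the focused and unfocused variants of the calculus would produce different token sequences for the same underlying derivation, which is precisely the difference whose effect on model performance the thesis sets out to measure.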
Official report (pending)