ELBOW ROOM

This is where I collect and write things, mostly on art and artificial intelligence. Established in 2010 by Vaishak Belle.

Michael raises a good point. Classic ILP (inductive logic programming) does have a scalability problem, perhaps analogous to how the classic program synthesis problem is computationally intractable.
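
As a back-of-the-envelope illustration of one source of the blowup (the numbers and pool sizes below are mine, not drawn from any particular ILP system): even counting only clause bodies assembled from a fixed pool of candidate body literals, the hypothesis space grows combinatorially.

```python
from math import comb

def num_clause_bodies(num_literals: int, max_body_size: int) -> int:
    """Count clause bodies of size 1..max_body_size drawn from a pool of
    candidate body literals -- a crude proxy for the size of an ILP
    hypothesis space."""
    return sum(comb(num_literals, i) for i in range(1, max_body_size + 1))

for pool in (10, 50, 100):
    print(pool, num_clause_bodies(pool, 5))
# 10 -> 637; 50 -> 2,369,935; 100 -> 79,375,495
```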

However, a great many newer directions are being considered: see, e.g., neural program synthesis, sketching, the neural ILP of Evans et al., and the other dimensions to ILP identified in the JAIR survey paper by Andrew Cropper and Sebastijan Dumančić. And of course, there is also the incorporation of background knowledge whilst training neural networks (e.g., semantic loss, MultiplexNet). The general point is that incorporating background knowledge whilst learning from and reasoning about examples is a fundamental paradigm, and one that is often ignored.
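
To make the semantic-loss idea concrete: in the formulation of Xu et al., the loss is the negative log-probability that the network's independent Bernoulli outputs jointly satisfy a logical constraint. Below is a minimal PyTorch sketch for the "exactly one output is true" constraint; the function name and the choice of constraint are mine, and a real system would compile arbitrary constraints via a circuit rather than hand-coding the model count.

```python
import torch

def semantic_loss_exactly_one(probs: torch.Tensor) -> torch.Tensor:
    """Semantic loss for the constraint 'exactly one output is true'.

    probs: (batch, n) tensor of independent Bernoulli probabilities.
    Returns -log sum_i [ p_i * prod_{j != i} (1 - p_j) ], i.e. the
    negative log-probability that the constraint holds under the model,
    averaged over the batch.
    """
    eps = 1e-12
    log_p = torch.log(probs + eps)            # log p_i
    log_not_p = torch.log(1.0 - probs + eps)  # log (1 - p_i)
    # Log-probability of the world where only variable i is true:
    # log p_i + sum_{j != i} log (1 - p_j)
    per_world = log_p + log_not_p.sum(dim=1, keepdim=True) - log_not_p
    log_wmc = torch.logsumexp(per_world, dim=1)  # log weighted model count
    return -log_wmc.mean()
```

In training, this term would typically be added to the usual supervised loss with a small weight, e.g. `loss = task_loss + lam * semantic_loss_exactly_one(torch.sigmoid(logits))`.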

See also recent work on implicit learning under PAC semantics, which follows the thread of Valiant's robust logics.
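
For a feel of the underlying notion (a toy sketch; the helper names and data are mine): under PAC semantics a formula is (1 − ε)-valid if it holds with probability at least 1 − ε over the example distribution, and implicit learning answers such queries directly against the examples, without ever constructing an explicit hypothesis.

```python
from typing import Callable, Sequence

def empirically_valid(query: Callable[[dict], bool],
                      examples: Sequence[dict],
                      eps: float = 0.1) -> bool:
    """Toy check of (1 - eps)-validity under PAC semantics: accept the
    query if it holds on at least a 1 - eps fraction of the observed
    examples. A real procedure would also account for sampling error
    (e.g., a Hoeffding-style confidence margin) and for reasoning from
    partially observed examples."""
    hits = sum(1 for x in examples if query(x))
    return hits / len(examples) >= 1 - eps

# e.g., is "bird implies flies" (1 - eps)-valid on the data?
examples = [{"bird": True, "flies": True},
            {"bird": True, "flies": True},
            {"bird": False, "flies": False},
            {"bird": True, "flies": False}]
print(empirically_valid(lambda x: (not x["bird"]) or x["flies"],
                        examples, eps=0.3))  # True: holds on 3/4 >= 0.7
```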