ELBOW ROOM

This is where I collect and write things, mostly on academe and artificial intelligence. Established in 2010 by Vaishak Belle.

  1. looking forward to this: we are considering a cross-disciplinary approach towards a responsible AI ecosystem. Project led by Shannon and Ewa.

  2. More often than not, technical problems are not abstract enough to feel like a jigsaw puzzle in your head.

    Indeed, as you get better at things, you branch out or do meta-level thinking, so the abstract formulation might get lost.

    So when things do start to fall into place, you may find yourself in those rare moments when it all clicks together. Pieces fit. What you write flows. It’s the feeling I imagine a safe cracker gets when she pries open the lock.

    It’s satisfying: you can look at what you’ve created and know that it’s a labor of love. That’s when you know it’s all worth it. A job well done.

  3. brittleness of symbolic logic?

    There’s talk going around about the “brittleness” of symbolic AI. 

    I’ve used this qualifier too in some of my papers, but it’s worth noting that there’s some nuance here. Four dimensions are worth articulating, I think.

    One notion of brittleness might stem from classical logic: classical semantics requires that the consequences of a knowledge base (KB) are true in every model of the KB. So if there are “mistakes” in the KB, there’s no way of escaping them in the consequences: they’ll embrace every such mistake and take it as categorical truth. There’s work on how to include possibly conflicting observations, e.g., belief revision and Valiant’s robust logics / PAC-semantics.

    A second and related notion is about the consequent needing to be true in all of the KB’s models (worlds). There’s ample work on non-classical semantics, e.g., fuzzy logic (where the consequences can be mapped into the real line) and probabilistic semantics (where we get a number corresponding to the ratio of the KB’s models in which the consequent is true; a minimal sketch of this appears at the end of this item).

    A third notion is when the knowledge is specified by an expert. There are various approaches to inducing symbolic structures partially or fully from data (e.g., Bayesian program induction, statistical relational learning, inductive logic programming).

    A final notion is when the predicates in the theory map onto concrete immutable concepts, e.g., the definitions of the number “6” and “Belgian waffles”, which might be best characterized from image training data. Approaches such as DeepProbLog allow the mixing of symbolic background knowledge with fuzzy/probabilistic/neural concepts.

    Taking all these into account, the brittleness charge is something of a straw man. Moreover, when researchers develop meta-level results (e.g., decidability of a fragment, semantics of new logics), they can do so without committing to where the knowledge came from, so such results remain valid regardless.
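
    On the probabilistic semantics mentioned above, here is a minimal sketch in Python (the toy KB, atoms and function names are mine, purely for illustration): it enumerates the models of a propositional KB and reports the fraction of models in which a query holds, rather than demanding truth in all of them.

    ```python
    from itertools import product

    # Toy propositional setting: a KB and a query, each a function
    # from a truth assignment (a dict) to a bool. Purely illustrative.
    ATOMS = ["bird", "flies", "penguin"]

    def kb(w):
        # Penguins are birds; penguins don't fly.
        return (not w["penguin"] or w["bird"]) and \
               (not w["penguin"] or not w["flies"])

    def query(w):
        return w["flies"]

    def degree_of_entailment(kb, query, atoms):
        """Fraction of the KB's models in which the query is true.

        Classical entailment demands this ratio be 1.0; a probabilistic
        semantics reports the ratio itself instead.
        """
        models = [w for w in (dict(zip(atoms, vals))
                              for vals in product([True, False],
                                                  repeat=len(atoms)))
                  if kb(w)]
        return sum(query(w) for w in models) / len(models)

    print(degree_of_entailment(kb, query, ATOMS))  # 0.4: not a categorical 0 or 1
    ```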

  4. Regarding the RSE’s magazine issue(s), I forgot to add, if you are in Edinburgh, you can pop in to the RSE building and grab a hard copy. It includes a number of articles on AI and ethics.

  5. When you read through great material, there’s a moment when your mind feels full of creative enthusiasm, and you are eager to get started on your next project. I’d like to lock in that feeling for a minute, an hour, a day, and come away feeling inspired. That’s the true power of books.

  6. I very much like the idea of blogging chapters of the technical books one is reading. I think it’s really useful to the community. I should try it sometime. Meanwhile, for a very short intro to FOL, including the interpolation result, see Reiter’s 2001 Knowledge in Action book.

  7. (i) It is not clear how to attach probabilities to statements containing quantifiers in a way that corresponds to the amount of conviction people have.

    (ii) The information necessary to assign numerical probabilities is not ordinarily available. Therefore, a formalism that required numerical probabilities would be epistemologically inadequate.

    — McCarthy and Hayes.


    This argument from the late 1960s is a powerful one about the representational capabilities of languages for modelling reasoning systems: such languages should allow for qualitative as well as quantitative uncertainty.

    It’s one of the reasons I find the work on the probabilistic situation calculus by Bacchus, Halpern and Levesque so appealing.


    It allows a specification of belief that can be partial or incomplete, in keeping with whatever information is available about the application domain. It does not require specifying a prior distribution over some random variables, for example. Basically, some logical constraints are imposed on the initial state of belief. These constraints may be compatible with one or very many initial distributions and sets of independence assumptions. All the properties of belief will then follow at a corresponding level of specificity.
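
    To make that concrete, here is a toy rendering in Python of partially specified belief (my own illustration, not the Bacchus, Halpern and Levesque formalism itself; it assumes scipy is available): linear constraints pin down a set of distributions over worlds, and a query then gets probability bounds rather than a point value.

    ```python
    from scipy.optimize import linprog

    # Four worlds over two propositions, ordered as (rain, wind):
    # w0=(T,T), w1=(T,F), w2=(F,T), w3=(F,F).

    # Equality constraint: the four probabilities sum to 1.
    A_eq, b_eq = [[1.0, 1.0, 1.0, 1.0]], [1.0]

    # Inequality constraints (linprog expects A_ub @ p <= b_ub):
    #   P(rain) >= 0.7         ->  -(p0 + p1) <= -0.7
    #   P(rain & ~wind) <= 0.2 ->    p1       <=  0.2
    A_ub = [[-1.0, -1.0, 0.0, 0.0],
            [ 0.0,  1.0, 0.0, 0.0]]
    b_ub = [-0.7, 0.2]

    # Query: P(wind) = p0 + p2. Minimise and maximise it over every
    # distribution compatible with the constraints.
    query = [1.0, 0.0, 1.0, 0.0]
    opts = dict(A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                bounds=[(0.0, 1.0)] * 4)

    lo = linprog(query, **opts).fun
    hi = -linprog([-c for c in query], **opts).fun
    print(f"P(wind) lies in [{lo:.2f}, {hi:.2f}]")  # [0.50, 1.00]
    ```

    No prior is ever fixed; adding or removing constraints tightens or loosens the interval, matching the “corresponding level of specificity” above.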

  8. ACL 2023 AI-assisted writing policy

    Here is our take on some cases frequently discussed in social media recently:

    Assistance purely with the language of the paper. When generative models are used for paraphrasing or polishing the author’s original content, rather than for suggesting new content - they are similar to tools like Grammarly, spell checkers, dictionary and synonym tools, which have all been perfectly acceptable for years. If the authors are not sufficiently fluent to notice when the generated output does not match their intended ideas, using such tools without further checking could yield worse results than simpler-but-more-accurate English. The use of tools that only assist with language, like Grammarly or spell checkers, does not need to be disclosed.

  9. This thread is worth noting: on the dichotomy manifesting in the tech world, how it challenges non-quantitative ways of approaching problems, and the impact it has on the products we encounter.

    It’s precisely why our projects on trustworthy autonomous systems, among others, explore issues from computer science, philosophy and law.

  10. Logicians should reclaim the word deep. “Deep” in the sense of allowing arbitrarily many steps in proof systems until failure or success, as in classical logic, for example. So that’s deep reasoning.

    Shallow reasoning is when the system only permits a few proof steps, as seen in bounded proof systems in some neural theorem provers (a toy contrast follows).
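
    Here is that contrast in Python (the rule chain is invented for illustration): forward chaining over Horn rules, once iterated to a fixpoint (“deep”) and once under a step budget (“shallow”).

    ```python
    # Horn rules as (body, head) pairs; a long chain a0 -> a1 -> ... -> a9.
    RULES = [({f"a{i}"}, f"a{i+1}") for i in range(9)]

    def forward_chain(facts, rules, max_steps=None):
        """Derive facts by forward chaining.

        max_steps=None is 'deep' reasoning: iterate until a fixpoint.
        A small max_steps is 'shallow' reasoning: stop after a few
        rounds, as in bounded proof systems.
        """
        derived, steps = set(facts), 0
        while max_steps is None or steps < max_steps:
            new = {head for body, head in rules
                   if body <= derived and head not in derived}
            if not new:
                break
            derived |= new
            steps += 1
        return derived

    print("a9" in forward_chain({"a0"}, RULES))               # True: deep
    print("a9" in forward_chain({"a0"}, RULES, max_steps=3))  # False: shallow
    ```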

  11. My current blogging pipeline is: post things via email, which, via IFTTT, gets to Tumblr, which, via IFTTT, gets to Twitter. By means of moa, things get shared on Mastodon. The disadvantage is that these social media posts have truncated texts, with links. I’m all for having traffic, but I see it as unnecessary. I’m happy for the content to live permanently on Tumblr, with folks only seeing the text on Twitter and Mastodon.

    However, Twitter’s character limit is annoying, as I’ve blogged here before, and although Mastodon is better, it doesn’t support hyperlinks, e.g., Markdown.

    If I were to force myself to use only one link per article, which I could in the majority of cases, I’d prefer posting directly on Mastodon, which would then go to Twitter; Tumblr would only be a backup, not needing to be visited except to look through the archives. I might consider this in a while, once I see what Tumblr’s ActivityPub integration looks like; if it’s not satisfactory, I’ll try switching to Mastodon as the main posting platform.

    (Alternatively, if they increase the character limit on Twitter to 4k, I would post solely on Twitter, which then gets sent to Mastodon and Tumblr. But there’s been an outage since yesterday affecting third-party clients, which some suspect is deliberate.)

  12. With all these recent events of folks not accepting democratic elections, it’s worth remembering:


    Authoritarian governments will do anything to stay in power - shame and maim their own people. So the next time you vote for the new guy who promises order, whilst scapegoating the powerless and the different as a proxy for his mediocrity, know he’ll shame and maim you next should he find that it benefits his agenda.

  13. There are many dimensions to explanations in AI. In the case of machine learning models, we might be interested in understanding how black-box systems work, which might range from understanding the influence of individual data points to looking critically at which features influence the classification. When we want to go beyond prediction, it becomes much more important to understand stakeholder engagement and handle input from the user. Follow-up questions need to be entertained. Some kind of counterfactual argument, contrastive argument or explanation involving repeated back and forth might be needed (a toy counterfactual search is sketched below). Ideally the system offers the simplest explanation and expands this when clarification is requested.
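
    As a minimal sketch of the counterfactual idea in Python (the classifier and feature names are invented): search for the smallest set of feature flips that changes a black-box model’s decision, which is arguably the simplest explanation one can offer first.

    ```python
    from itertools import combinations

    # A stand-in black-box classifier over binary features (invented).
    def predict(x):
        return int(x["income_high"] and not x["defaulted"])

    def counterfactual(x, predict):
        """Smallest set of feature flips that changes the prediction."""
        original = predict(x)
        features = list(x)
        for size in range(1, len(features) + 1):
            for flips in combinations(features, size):
                y = dict(x)
                for f in flips:
                    y[f] = not y[f]
                if predict(y) != original:
                    return flips
        return None

    applicant = {"income_high": True, "defaulted": True, "employed": True}
    print(predict(applicant))                  # 0: rejected
    print(counterfactual(applicant, predict))  # ('defaulted',): "had you not
                                               # defaulted, you'd be accepted"
    ```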

  14. Some of the writing in ML comes off as so dry: “ABC is related work, we improve on the architecture of X by adding Y layers, and we now show that we improve on Z’s results by P%.”

    Where’s the fire and passion and drive, folks? Or is adding poetry to hyper-parameter tuning reports just hyper-lame?

  15. What if deep learning had been named shallow learning, because it’s only deep in the number of layers and perhaps implied abstraction but not much more? How many folks would proudly carry the label?