ELBOW ROOM

This is where I collect and write things, mostly on academe and artificial intelligence. Established in 2010, by Vaishak Belle.

  1. Neuro-symbolic AI for Agent and Multi-Agent systems [NeSyMAS] Workshop.

    We are excited to be sending out the call for the Neuro-symbolic AI for Agent and Multi-Agent systems workshop at AAMAS-2023. Details below.

    CALL FOR PAPERS: Neuro-symbolic AI for Agent and Multi-Agent systems [NeSyMAS] Workshop

    [part of AAMAS 2023; London, UK; 29th May-2nd June 2023]

    Paper submission link: https://easychair.org/conferences/?conf=nesymas2023

    AI has vast potential, some of which has been realised by developments in deep learning methods. However, it has become clear that these approaches have reached an impasse and that such “sub-symbolic” or “neuro-inspired” techniques only work well for certain classes of problem and are generally opaque to both analysis and understanding. “Symbolic” AI techniques, based on rules, logic and reasoning, while not as efficient as “sub-symbolic” approaches, behave better in terms of transparency, explainability, verifiability and, indeed, trustworthiness. A new direction described as “neuro-symbolic” AI combines the efficiency of “sub-symbolic” AI with the transparency of “symbolic” AI. This combination potentially provides a new wave of AI systems that are both interpretable and elaboration tolerant, and that can integrate reasoning and learning in a very general way.

    Though there is work on neuro-symbolic AI that competes with classical ML models, for example through label-free supervision and graph embeddings, there is much less on its use for agent modelling or multi-agent systems. Especially in a multi-agent context, the use of symbolic models for mental-state reasoning together with low-level perception patterns, or the formation of reasoning-capable representations from subsymbolic data, represents a promising area where MAS offers a unique perspective.

    This workshop’s aim is thus to assemble leading-edge work in which neuro-symbolic AI approaches and MAS interact.

    TOPICS. Topics of interest include, but are not limited to, the following:

    - Explicit agency in neuro-symbolic multi-agent systems
    - Neuro-symbolic reinforcement learning
    - Neuro-symbolic robotics and planning
    - Mental models and epistemic logics for MAS
    - Multiagency flavours
    - Symbolic knowledge representations for subsymbolic MAS
    - Neural-symbolic multi-agent systems
    - Hybrid agent architectures
    - Formal analysis of neural-symbolic multi-agent systems

    SUBMISSION. We welcome unpublished technical papers of up to 8 pages, and short (2-4 pages) position papers. Papers should be written in English, be prepared for single-blind reviewing, be submitted as a PDF document, and conform to the formatting guidelines of AAMAS 2023: https://aamas2023.soton.ac.uk/wp-content/uploads/sites/443/2022/06/AAMAS-2023-Formatting-Instructions.zip

    Papers selected for presentation at the workshop will be included in the workshop’s proceedings as open access publications, tentatively in CEUR (https://ceur-ws.org/) or EPTCS (https://www.eptcs.org/).

    Please use the following link to submit your paper: https://easychair.org/conferences/?conf=nesymas2023

    DEADLINES. Important dates [all dates are 23:59 AoE]:

    - Paper submission deadline: 13 March 2023
    - Paper acceptance notification: 17 April 2023
    - Camera-ready deadline: 15 May 2023
    - Workshop: 29 or 30 May 2023

    ORGANISING COMMITTEE.

    - Vaishak Belle, University of Edinburgh, UK
    - Michael Fisher, University of Manchester, UK
    - Xiaowei Huang, University of Liverpool, UK
    - Masoumeh Mansouri, University of Birmingham, UK
    - Albert Meroño-Peñuela, King’s College London, UK
    - Sriraam Natarajan, UT Dallas, USA
    - Efi Tsamoura, Samsung Cambridge, UK

    This workshop is organised by the Interest Group in Neuro-Symbolic AI of The Alan Turing Institute. You can find more information about us and how to join the Interest Group on our website, https://www.turing.ac.uk/research/interest-groups/neuro-symbolic-ai

  2. “I find social media to be a soul-sucking void of meaningless affirmation.” — Wednesday Addams

  3. The start of the year always seems to bring a whole bunch of reviewing (papers, grants, etc.). On the one hand, it’s a good way to get your brain fired up, by reading all the excellent work others are doing. But on the other, there is so much interesting work, of so many different kinds, that the context switch is tough on your concentration, especially as you attempt to stumble back to work-mode.

  4. We have a thread on Mastodon about when AI art can be considered original in its own right. Feel free to comment.

    See also a previous re-blog.

  5. Yet again Charles has a nice post about research and academe.

    I can’t say I have found a foolproof way to instil joy given the competitive nature of the field, but I’ve been thinking about this a lot over the last year too: how to manage expectations and have fun. For myself, reflecting on my past work, blogging regularly about what excites me, and thinking carefully through my ideas to see what kind of work and what kind of results are fun to prove and pursue are means of keeping me true to myself.

  6. I haven’t laboured to read through too much of that document, but the proposition-vs-judgement distinction is precisely where I would get stuck. I’ve spoken to many colleagues about their work on type theory, and I recognise it plays a major role in the foundations of programming languages. I need to find an accessible starting point from sorted first-order logic. (And I wouldn’t trust ChatGPT for something like this unless I’m sure there are a couple of documents it had access to and can summarise.)

  7. pdlcomics:

    ideas

    There’s a special place for bad ideas, and it’s a gift that keeps on giving. Thankfully I don’t have the energy to run with every idea and see where it lands me.

  8. What’s the role of semantics? So that these artifacts - tokens, symbols, programs and their compositions - are understood as well-defined expressions. And so that their properties can be studied. Formally.

  9. Graphs, tokens, symbols, assertions, programs — we understand that they are needed for modelling context and common sense in AI. But they are logical artifacts in different guises.

  10. Connecting dots based on first and last names is superficial, of course, but this post is such a joy to read, and perhaps deliberately rambling.

  11. [image]

    After the calm of the holidays, I am pleased to be back at work these weeks, and I keep thinking: nothing beats the feeling of a morning espresso in the gorgeous Informatics Forum, coupled with a few good AI papers. I am currently going through some recent work on actual causality, and will have more to say about that shortly. I’m looking at a slight variant of the standard version; see the screenshot taken from Joseph Halpern’s webpage.


    [image]
  12. The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.
    ― Isaac Asimov

  13. elbowroom:

    [image]

    Best thing about AI is all the risks it poses. Galactica mulls.

    Just noticed: If we didn’t know it was parroting and grabbing text from sources, you’d think there’s some self-critical analysis on the last two bullet points.

  14. The tech-bro trick: introduce something that is gimmicky but wildly irresponsible. Don’t bother looking at the ethical or legal ramifications. Some folks will defend the “cool tool”; others will fear AI further. Either way, you are in the news. Invite more investors.

  15. An interesting follow-up thought regarding Searle’s Chinese room is: what if it’s not a lookup table in the traditional sense, but rather a distribution, which is a more compact representation of the data? Then the system utters responses based on the tokens and prompts in the user’s interaction. Not dissimilar to ChatGPT.

    Such a system could be said not to have an “understanding” of the language. Moreover, precisely because its concepts and terms are not grounded in reality, it would be prone to mistakes: uttering plausible responses, but also nonsense, a lot of the time. Not unlike ChatGPT.
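    To make the lookup-table-versus-distribution contrast concrete, here is a toy sketch in Python (the prompts, tokens and weights are all invented for illustration, and a real language model conditions on far richer context than a single prompt string):

```python
import random

# A Searle-style lookup table: every prompt maps to exactly one canned reply.
lookup_table = {
    "hello": "hi there",
    "how are you": "fine, thanks",
}

def table_reply(prompt: str) -> str:
    """Return the fixed response, or a marker if the prompt was never tabulated."""
    return lookup_table.get(prompt, "<no entry>")

# A distribution instead: each prompt maps to weighted next tokens,
# a far more compact summary of many observed continuations.
distribution = {
    "hello": {"hi": 0.7, "hey": 0.2, "goodbye": 0.1},
}

def sampled_reply(prompt: str, rng: random.Random) -> str:
    """Sample a token in proportion to its weight: plausible, not guaranteed sensible."""
    tokens = distribution.get(prompt)
    if tokens is None:
        return "<no entry>"
    words, weights = zip(*tokens.items())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
print(table_reply("hello"))         # always "hi there"
print(sampled_reply("hello", rng))  # usually "hi", occasionally "goodbye"
```

    The table can only ever return its one canned reply, while the distribution samples a merely plausible token, occasionally a nonsensical one, which is the behaviour sketched above.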

Previous