ELBOW ROOM

This is where I collect and write things, mostly on academe and artificial intelligence. Established in 2010 by Vaishak Belle.

  1. Last week, there was a Twitter beef about when one counts as a reasonable expert in AI and when one’s criticisms and viewpoints should be taken seriously.


    Not that I have any skin in the game (of that exchange), but in principle, by what measure is one then allowed to discuss a discipline? How will we ever allow for cross-disciplinary banter?


    There are very many papers from medical practitioners, lawyers, and ethicists on the limitations of, say, explainable AI, and as far as I can tell, they all offer valuable insights. One assumes they did the groundwork and attempted to understand AI as best as they could. Otherwise, why would they stake their professional reputation? After all, computer scientists frequently borrow technical devices from philosophy and the social sciences, and are currently attempting to leverage insights from ethics. Sure, sometimes well-intentioned criticisms can be misjudged because something fundamental is misunderstood, but that’s OK. As long as it’s not wild speculation and uncritical opining, I think every field should welcome outsider criticism and feedback. It is possible, in return, for experts from the field to respond to such criticisms with well-founded rebuttals or clarifications, and so it goes back and forth. And cross-disciplinary science is established. During the 80s, plenty of linguists, cognitive scientists, and philosophers engaged in discussions about AI, and we were, to a large extent, wiser for it.


    P.S. One type of wild speculation that I have in mind is from the early days of AI, when untestable claims were made on the basis of metaphysical argumentation. One particularly notorious example, as far as I’m concerned, is Roger Penrose’s Emperor’s New Mind. The claims just went all over the place in that one, and I couldn’t really find any coherent argument. Likewise, books claiming “we will have superhuman intelligence in exactly X years” are ones I consider idle speculation and ignore. It’s more pertinent to worry about the misuse of dumb AI, drones, face-recognition systems, automation, and the loss of jobs, among other things.

    P.P.S. One thing I often see non-AI folks gloss over is precisely all the work done on non-ML AI, such as automated planning, reasoning, and search. They point to some narrow ML models, lump us all together as pattern-recognition experts, and ignore the significant conceptual work on knowledge representation and reasoning.