This is where I collect and write things, mostly on academe and artificial intelligence. Established in 2010 by Vaishak Belle.
After the calm of the holidays, I am pleased to be back at work these weeks, and I keep thinking: nothing beats the feeling of having a morning espresso in the gorgeous Informatics Forum, coupled with a few good AI papers. I’m currently going through some recent work on actual causality and will have more to say about that shortly. I’m looking at a slight variant of the standard version; see the screenshot taken from Joseph Halpern’s webpage.
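For reference, here is the modified Halpern-Pearl definition roughly as I remember it (transcribed from memory, so treat Halpern’s own statement as authoritative): $\vec{X} = \vec{x}$ is an actual cause of $\varphi$ in a causal setting $(M, \vec{u})$ if:

AC1. $(M, \vec{u}) \models (\vec{X} = \vec{x}) \wedge \varphi$.

AC2. There is a set $\vec{W}$ of variables and a setting $\vec{x}\,'$ of the variables in $\vec{X}$ such that, where $\vec{w}^*$ denotes the actual values of $\vec{W}$ (i.e. $(M, \vec{u}) \models \vec{W} = \vec{w}^*$), we have $(M, \vec{u}) \models [\vec{X} \leftarrow \vec{x}\,', \vec{W} \leftarrow \vec{w}^*]\, \neg\varphi$.

AC3. $\vec{X}$ is minimal: no strict subset of $\vec{X}$ satisfies AC1 and AC2.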
The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.
― Isaac Asimov
The best thing about AI is all the risks it poses. Galactica mulls.
Just noticed: if we didn’t know it was parroting and grabbing text from sources, you’d think there was some self-critical analysis in the last two bullet points.
The tech-bro trick: introduce something gimmicky but wildly irresponsible. Don’t bother looking at the ethical or legal ramifications. Some folks will defend the “cool tool”; others will fear AI further. Point is: you are in the news. Invite more investors.
An interesting follow-up thought on the Searle Chinese room remark: what if it’s not a lookup table in the traditional sense, but rather a distribution, a more compact representation of the data? Then the system utters responses based on the tokens and prompts in the user’s interaction. Not dissimilar to ChatGPT.
Such a system could be seen as not having an “understanding” of the language. Moreover, precisely because its concepts and terms are not grounded in reality, it would be prone to mistakes a lot of the time: uttering plausible responses, but also plenty of nonsense. Not unlike ChatGPT.
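To make the contrast concrete, here’s a toy sketch in Python (the corpus and the names follows and utter are mine, purely for illustration): instead of a table with one canned reply per possible prompt, we store a conditional distribution over next tokens and sample from it.

import random
from collections import defaultdict

# Toy corpus, purely illustrative.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# A lookup table would need one entry per possible prompt.
# A distribution instead records, for each token, what tends to follow it:
# a far more compact representation of the same data.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def utter(prompt, length=5):
    # Sample a continuation token by token, conditioned on the prompt's last word.
    word = prompt.split()[-1]
    out = []
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # draw from the conditional distribution
        out.append(word)
    return " ".join(out)

print(utter("the cat"))  # plausible-sounding, with no grounding in any actual cat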
There was a Twitter discussion the other day about Searle’s Chinese room argument. My position, just to avoid ambiguity, is that it is impossible to create a lookup system of the sort he suggests. Language allows for very many compositions, so his argument rests on infeasible computational assumptions. See also “Is it enough to get the behaviour right?” by Levesque.
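A back-of-the-envelope computation makes the infeasibility plain (the vocabulary size and sentence length here are my own assumptions):

# Why a lookup table over sentences is hopeless, roughly.
vocab = 10 ** 5      # assume a vocabulary of 100,000 words
length = 20          # assume prompts of up to 20 words
sequences = vocab ** length
print(f"{sequences:.0e}")    # 1e+100 candidate word sequences
print(sequences > 10 ** 80)  # True: more entries than the estimated number of atoms in the observable universe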
He thought Tolkien was terribly negative in The Lord of the Rings. He really believed that Orcs were salvageable, because John was an optimist. He believed you really could get them back. He felt the elves were very much like environmentalists who didn’t like change. He hated to see the waste of technology.
The RING was technology to him, and so he has this sequel where they divert the river into Mount Doom and retrieve the ring. And of course the properties of the ring are separable, because by a process similar to chromatography, where you put the ring on a gold bar, the various properties, like invisibility and evil, will migrate at different speeds. John was nothing if NOT creative; and there are fragments of this lying around, if you can actually dig them up and read them. But I think that shows that in everything John was an eternal optimist, and as such I am sure that right now he is expecting you, having heard this, to say “YES, I really should go back and do that logical AI”!
— Tom Costello on John McCarthy
The music industry has found some ways to deal with sampling wrt copyright, but the art and text industries still need to catch up in terms of copyright and consent. Until then, tools like ChatGPT will sadly get a free pass on their “creative output.” It just looks creative; it’s really plagiarism.
I reject the contention that an important theoretical difference exists between formal and natural languages. — Richard Montague
(‘English as a formal language’, 1968)
Academics are so committed to their work that after an intense end of semester in December, they decide to spend their holiday break writing their next paper. Why else do we have submission deadlines in early January?
The hoarders of toys of the ’90s ended up with the collectors’ items of today.
The hoarders of NFTs of today will end up, 30 years from now, as just hoarders of unpronounceable digital thingamajigs.
You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird… So let’s look at the bird and see what it’s doing — that’s what counts. I learned very early the difference between knowing the name of something and knowing something.
― Richard P. Feynman
An apt quote for dumb language models too, I think.
If (language) models do not have an account of actions, effects, causes and ramifications, how can they be expected to understand the impact of their mutterings? We are not holding the correct party to blame.
What I’ve noticed with the prevalence of Instagram and the like over the last decade is that, although the cliché says a picture is worth a thousand words, that doesn’t mean we should be almost categorically unable to describe what we experience. Yet that’s what’s happening. People frequently caption their pictures with “I’m at a loss for words” because that’s exactly how they find themselves. The truth is that with 140 characters and photo shares, we’ve become somewhat lazy with our written word. And that’s why we need to bring blogging back.
The desperate attempt by some academics and companies to promote language models as the key driver of AI progress is yet another instance of silver-bulletism. We need language models, but coupled with notions of agency, beliefs, and goals, to have something reliable that we can communicate with confidently. How can a system without a clear specification of what it’s after figure out what you are after?