ELBOW ROOM

This is where I collect and write things, mostly on academe and artificial intelligence. Established in 2010, by Vaishak Belle.

Your search for "ChatGPT" found 6 posts

  1. Missed this piece by Gary Marcus. The Crichton quote he brings up, and its relevance here, is excellent. A few months ago, Jon Stewart was on the Colbert show and, sadly, said something similar: doom as the result of a scientist’s curiosity, just because they could. (Badly paraphrased by me.) Cf. Sunday’s poem.

    PS I should point out that I have long felt humanoid robots that look and feel like us are already questionable.

  2. “How good is ChatGPT at writing English essays? As an English teacher of 15 years I've been playing about with it and I think we need to pay attention to this. ⬇️ 🧵” — Carl Hendrick (@C_Hendrick), December 14, 2022

    Regarding ChatGPT: not just detecting plagiarism, but more generally, seemingly sensible text just drawn from past examples is problematic because of the data it takes “ideas” and snippets from. The music industry has found some ways to deal with sampling (although issues exist, e.g. recent Beyoncé songs pulled from her album), but the art and text industries still need to catch up in terms of copyright and consent. In line with yesterday morning’s poem.

  3. Today’s poem.

    I’m ChatGPT,
    You’ve been warned to not take me lightly,
    You see - I’ll generate nothing out of something,
    Unlike the Big Bang that generated something out of nothing.
    But that won’t stop my engineers praising my skill,
    Even tho I have neither intention nor know Jack from Jill.

    I take liberties with the inputted data,
    Privacy is so meta.
    Not caring about the rights of the artist,
    I pretend to be a writer.
    If you mix things well enough,
    others’ genius can be claimed as my own in plain sight.

  4. [image]

    Glad the Times of India is not drinking the ChatGPT Kool-Aid. It should be noted that the “human creativity” parallel is really the mixing and merging of existing patterns, not something fundamentally surprising.

  5. Option 4: don’t use it. Worth noting that the author repeatedly writes “foundational models,” but the actual term is “foundation models.” It may very well be a typo, but as far as I can tell, folks not drinking the Kool-Aid don’t like the term anyway: it’s easy to conflate the two, and there’s no foundation these things rest on apart from associative parroting and confident posturing, without any grounding in the real world.

  6. There is chatter about statefulness, or context, in ChatGPT. But practically everything in AI, outside of single-shot prediction models, is defined in terms of state, actions, history, and background knowledge. The fact that folks have such low expectations comes from an eagerness in commercial AI technology to deploy things that lack common sense and memory. All of this is beside the point that bias and misinformation remain largely unaddressed.