• 0 Posts
  • 84 Comments
Joined 8 months ago
Cake day: November 19th, 2023


  • Those are trained attack corgis. They may look cute, but their itty bitty widdle teefies can rip your throat apart if you so much as look at them wrong. While you’re watching their little fluffy butts as they walk away, they just see you as a target: just today’s hit. One signal, one word, and it’s over. You’ve been mauled to death by adorable attack sausages.


  • Like today’s computer scientists, early biologists sucked at inventing new words and simply reused existing ones. “Berry” in common language is a small, usually sweet, edible fruit. Strawberries, blueberries, blackberries, and raspberries are all berries.

    Then biologists came along and decided that, actually, strawberries, raspberries, and blackberries are out, but watermelons and bananas are in, because the size of the fruit doesn’t matter; only the placement of the seeds decides whether something is a proper, scientific berry.

    A similar thing has happened with “fruit” and “vegetable”, where scientific fruits include cucumbers, eggplants, and pumpkins. Luckily, all three of these are also berries.

    I say we ignore them, and use words to mean sensible things.


  • What we have done is invent massive, automatic, no-holds-barred pattern recognition machines. LLMs use detected patterns in text to respond to questions. Image recognition is pattern recognition, with some of those patterns given names (like “cat” or “book”). Image generation is a little different, but it basically just flips image recognition on its head, editing images to look more like the patterns it was taught to recognize.

    This can all do some cool stuff. There are some very helpful outcomes. It’s also (automatically, ruthlessly, and unknowingly) internalizing biases, preferences, attitudes, and behaviors from the billion-plus humans on the internet, and perpetuating them in all sorts of ways, some of which we don’t even know to look for.

    This makes its potential applications in medicine rather terrifying. Do thousands of doctors all think women are lying about their symptoms? Well, now your AI does too. Do thousands of doctors suggest more expensive treatments for some groups and less expensive ones for others? AI can find that pattern (there’s a toy sketch of exactly this failure mode below).

    This is also true in law (I know there’s supposed to be no systemic bias in our court systems, but AI can find those patterns, too), engineering (any guesses as to how human engineers change their safety practices based on the area a bridge or dam will be installed in? AI will find out for us), etc., etc.

    The thing that makes AI bad for some use cases is that it never knows which patterns it is supposed to find and which ones it isn’t supposed to find. Until we have better tools to tell it not to notice some of these things, and to scrub away a lot of the randomness that’s left behind inside popular models, there are severe constraints on what it should be doing.
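
    A toy sketch to make that concrete (nothing here comes from the comment above; the groups, decisions, and counts are all invented): a "model" that does nothing but count which decision each group historically received will recommend exactly the biased pattern it was fed.

    ```python
    from collections import Counter

    # Hypothetical training records: (patient group, decision a past doctor made).
    # Every value is made up purely for illustration.
    training_data = [
        ("group_a", "order tests"), ("group_a", "order tests"),
        ("group_a", "order tests"), ("group_a", "dismiss"),
        ("group_b", "dismiss"),     ("group_b", "dismiss"),
        ("group_b", "dismiss"),     ("group_b", "order tests"),
    ]

    # "Training" is nothing more than counting which decision each group usually got.
    decision_counts = {}
    for group, decision in training_data:
        decision_counts.setdefault(group, Counter())[decision] += 1

    def recommend(group):
        """Recommend whatever decision was most common for this group in the past."""
        return decision_counts[group].most_common(1)[0][0]

    # The model can't tell a medically justified pattern from a biased one;
    # it found a pattern in the data, so it perpetuates it.
    print(recommend("group_a"))  # -> order tests
    print(recommend("group_b"))  # -> dismiss
    ```

    A real model replaces the counting with statistics over millions of parameters, but the failure mode is the same: whatever correlations sit in the training data come out the other side as recommendations.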