• Bye@lemmy.world

      I do, exclusively

      Getting rid of caffeine (decaf still has a little) has been amazing for me.

        • lobut@lemmy.ca

          I’m not the person you’re replying to but for me, I used to get random headaches and jitters and I feel more consistent now.

          The problem is the withdrawal period can be hard for some. It was for me, but overall worth it in the end.

          • Asafum@feddit.nl

            I have a “thermos” style bottle that’s probably 16oz that I drink throughout the day every day. Weekends I’ll drink more as I’m home and it’s readily available.

            It’s cold brew so it’s already cold for anyone disgusted by the “throughout the day” bit lol

    • UNWILLING_PARTICIPANT@sh.itjust.works

      Not the same, but I switched to tea mostly for aesthetic reasons, and after a brief adjustment period, I’m finding it a lot more fun and varied than coffee drinking. It’s also easier to find very low-caffeine, or tasty zero-caffeine, teas in as many varieties as you can imagine.

      I’ll still have a social coffee every now and then, but anyway I’d recommend it, at least to check out. It’s like discovering scotch after a lifetime of beer drinking.

      • Appoxo@lemmy.dbzer0.com

        Try explaining tea to others though.
        Every time I am on-site I get asked for two options: coffee or water.

          • Appoxo@lemmy.dbzer0.com

            I assume you are either not interested in loose tea or not there yet.

            Once you reach temperature-sensitive teas (like Japanese greens) that are additionally sensitive to hard water, it quickly becomes difficult to brew tea at work or anywhere outside your home.

            Personally I started bringing a 400ml thermos (about my usual cup) and, on some days, my 1L thermos.
            Both of my thermoses keep a 70°C tea warm (probably around 50°C) until the end of the workday, so temperature doesn’t become an issue; oxidation does instead. Greens like to turn a faint brown color and change their taste. Sometimes for the better, sometimes not.

    • underisk@lemmy.ml

      The comic is about using a machine learning algorithm instead of a hand-coded algorithm, not about using ChatGPT to write a trivial program that no doubt exists a thousand times in the data it was trained on.

      • Honytawk@lemmy.zip

        The strengths of machine learning are in extremely complex programs.

        Programs no junior dev would be able to accomplish.

        So if the post can misrepresent the issue, then the commenter can do so too.

        • underisk@lemmy.ml

          Yes, that is what they are good at, but not as good as a deterministic algorithm that can do the same thing. You use machine learning when the problem is too complex to solve deterministically and an approximate result is acceptable.
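
          For the rectangle example specifically, the deterministic version is exact every single run; a minimal sketch in Python (my own toy illustration, not from the comic):

          ```python
          def rectangle_area(width: float, height: float) -> float:
              # Exact, instant, trivially testable; no training data required.
              return width * height

          assert rectangle_area(3, 4) == 12  # exactly 12, every time
          ```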

        • pearsaltchocolatebar@discuss.online

          Lol, no. ML is not capable of writing extremely complex code.

          It’s basically like having a bunch of junior devs cranking out code that they don’t really understand.

          ML for coding is only really good at providing basic [removed] code that is more time-intensive than complex. And even that you have to check for hallucinations.

          • kurwa@lemmy.world

            To reiterate what the parent comment of the one you replied to said, this isn’t about ChatGPT generating code; it’s about using ML to create a non-deterministic algorithm. That’s why in the comic the answer is only very close to 12 and not exactly 12.
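
            A toy sketch of what “very close to 12 but not exactly 12” looks like in practice (my own illustration, assuming scikit-learn; not from the comic):

            ```python
            import numpy as np
            from sklearn.neural_network import MLPRegressor

            # Learn "area" purely from examples instead of computing width * height.
            rng = np.random.default_rng(0)
            X = rng.uniform(0, 10, size=(10_000, 2))  # random (width, height) pairs
            y = X[:, 0] * X[:, 1]                     # labels come from the real rule

            model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
            model.fit(X, y)

            # Typically prints something near 12, but almost never exactly 12.0.
            print(model.predict([[3.0, 4.0]]))
            ```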

        • xmunk@sh.itjust.works

          I strongly disagree. ML is perfect for small bullshit like “What’s the area of a rectangle” - it falls on its face when asked:

          Can we build a website for our security paranoid client that wants the server to completely refuse to communicate with users that aren’t authenticated as being employees… Oh, and our CEO requested a password recovery option on the login prompt.

          • ulterno@lemmy.kde.social

            I got interested and asked ChatGPT. It gave a middle-management answer.
            Guess we know who’ll be the first to go.

        • Pelicanen@sopuli.xyz

          I think the exact opposite: ML is good for automating away the trivial, repetitive tasks that take time away from development, but it has a harder time making a coherent, maintainable architecture of interconnected modules.

          It is also good for data analysis, for example when the dynamics of a system are complex but you have a lot of data. In that context, the algorithm doesn’t have to infer a model that matches reality completely, just one that is close enough for the region of interest.

        • funkless_eck@sh.itjust.works

          The biggest high level challenge in any tech org is security and there’s no way you can convince me that ML can successfully counter these challenges

          “oh but it will but it will!”

          when

          “in the future”

          how long in the future

          “When it can do it”

          how will we know it can do it

          “When it can do it”

          cool.

  • orca@orcas.enjoying.yachts

    Ahh the future of dev. Having to compete with AI and LLMs, while also being forced to hastily build apps that use those things, until those things can build the app themselves.

  • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social

    The sad thing is that no amount of mocking the current state of ML today will prevent it from taking all of our jobs tomorrow. Yes, there will be a phase where programmers like myself, who refuse to use LLMs as a tool to produce work faster, will be pushed out by those who will work with LLMs. However, I console myself with the belief that this phase will last not even a full generation; even those collaborative devs will find themselves made redundant, and we’ll reach the same end without me having to eliminate the one enjoyable part of my job. I do not want to be reduced to being only a debugger for something else’s code.

    Thing is, at the point AI becomes self-improving, the last bastion of human-led development will fall.

    I guess mocking and laughing now is about all we can do.

    • KevonLooney@lemm.ee

      at the point AI becomes self-improving

      This is not a foregone conclusion. Machines have mostly always been stronger and faster than humans, because humans are generally pretty weak and slow. Our strength is adaptability.

      As anyone with a computer knows, if one tiny thing goes wrong it messes up everything. They are not adaptable to change. Most jobs require people to be adaptable to tiny changes in their routine every day. That’s why you still can’t replace accountants with spreadsheets, even though they’ve existed in some form for 50 years.

      It’s just a tool. If you don’t want to use it, that’s kinda weird. You aren’t just “debugging” things. You use it as a junior developer who can do basic things.

    • Doc Avid Mornington@midwest.social

      Well, we could end capitalism, and demand that AI be applied to the betterment of humanity, rather than to increasing profits, enter a post-scarcity future, and then do whatever we want with our lives, rather than selling our time by the hour.

      • fidodo@lemmy.world

        The only way I see that happening is if the entire economy collapses because nobody has jobs, which might actually happen pretty soon 🤷

  • MxM111@kbin.social

    Well, if training is included, then why is it not included for the developer? From the first days of his life?

      • thetreesaysbark@sh.itjust.works

        Sort of… If the dev hadn’t paid for their training, they wouldn’t need as big of a wage to pay off their training debt (the usual scenario, I’d wager).

        So in a way the company is currently paying off the debt for the dev’s training, most of the time.

    • ilinamorato@lemmy.world

      When did the training happen? The LLM is trained for the task starting when the task is assigned. The developer’s training has already completed, for this task at least.

      • Deceptichum@kbin.social

        No? The LLM was trained before you ever even interacted with it. They’re not going to train a model on the fly each time you want to use it, that’s fucking ridiculous.

        • ilinamorato@lemmy.world

          That’s the joke that the comic is making. Whether or not it’s reflective of reality, they’re joking about a company training a new AI model to calculate the area of rectangles.

        • 0ops@lemm.ee

          And even if they do need to train a model, transfer learning is often a viable shortcut
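
          For example, a rough transfer-learning sketch (my own example, assuming PyTorch/torchvision and a hypothetical 5-class task): reuse a pretrained backbone and only train a small new head.

          ```python
          import torch
          import torch.nn as nn
          from torchvision import models

          # Start from a backbone that was already trained on ImageNet.
          backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
          for p in backbone.parameters():
              p.requires_grad = False  # freeze the pretrained features

          # Swap in a new classifier head for the (hypothetical) 5-class task.
          backbone.fc = nn.Linear(backbone.fc.in_features, 5)

          optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
          loss_fn = nn.CrossEntropyLoss()

          # The usual training loop over your own (much smaller) labelled dataset:
          # for images, labels in dataloader:
          #     optimizer.zero_grad()
          #     loss_fn(backbone(images), labels).backward()
          #     optimizer.step()
          ```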

  • CanadaPlus@lemmy.sdf.org

    Agreed. If you need to calculate rectangles, ML is not the right tool. Now do the comparison for an image-identifying program.

    If anyone’s looking for the magic dividing line: ML is a very inefficient way to do anything, but it doesn’t require us to actually solve the problem, just to have a bunch of examples. For very hard but commonplace problems, this is still revolutionary.
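
    A minimal “just have a bunch of examples” sketch (my own toy, assuming scikit-learn): nobody hand-codes the rules for telling digits apart, you just show the model labelled samples.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Labelled examples stand in for an explicit solution to "what makes a 7 a 7".
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=5000)
    clf.fit(X_train, y_train)

    print(f"test accuracy: {clf.score(X_test, y_test):.2f}")  # approximate, never guaranteed perfect
    ```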

    • Buttons@programming.dev

      I think the joke is that the Jr. Developer sits there looking at the screen, a picture of a cat appears, and the Jr. Developer types “cat” on the keyboard then presses enter. Boom, AI in action!

      The truth behind the joke is that many companies selling “AI” have lots of humans doing tasks like this behind the scene. “AI” is more likely to get VC money though, so it’s “AI”, I promise.

    • flashgnash@lemm.ee

      I think it’s still faster than the actual solution in some cases. I’ve seen someone train an ML model to animate a cloak in a way that looks realistic, based on an existing physics simulation of it, and it cut the processing time down to a fraction.

      I suppose that’s more because it’s not doing a full physics simulation, it’s just parroting the cloak-specific physics it observed, but still.
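
      The general pattern there is a learned surrogate: run the expensive simulation offline to generate examples, then train a cheap model to imitate it. A rough sketch with a made-up simulator (my own illustration, assuming scikit-learn; the cloak project is someone else’s work):

      ```python
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def slow_simulation(p):
          # Stand-in for an expensive physics step (a damped oscillation, purely illustrative).
          t = np.linspace(0.0, 1.0, 200)
          return float(np.mean(np.exp(-p[0] * t) * np.cos(p[1] * t)))

      # Generate training examples by running the real simulation offline.
      rng = np.random.default_rng(1)
      params = rng.uniform(0.1, 5.0, size=(5_000, 2))
      targets = np.array([slow_simulation(p) for p in params])

      surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=1)
      surrogate.fit(params, targets)

      # At runtime the cheap surrogate stands in for the full simulation (approximately).
      print(surrogate.predict([[1.0, 3.0]]), slow_simulation([1.0, 3.0]))
      ```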

      • CanadaPlus@lemmy.sdf.org

        I suppose that’s more because it’s not doing a full physics simulation, it’s just parroting the cloak-specific physics it observed, but still.

        This. I’m sure that to a sufficiently intelligent observer it would still look wrong. It’s just that we haven’t come up with a way to profitably ignore the unimportant details of the actual physics, relative to our visual perception.

        In the same vein, one of the big things I’m waiting on is somebody making an NN pixel shader. Even a modest network can achieve a photorealistic look very easily.

  • audiomodder@lemmy.blahaj.zone

    Yea, but does the AI ask me why “x” doesn’t work as a multiplication operator 14 times while complaining about how this would be easier in Rust?

  • Medli@lemmy.world

    To be fair, the human had how many more years of training than the AI to even be fit to attempt this problem?

    • TonyTonyChopper@mander.xyz

      The future unifying metric for productivity should be joules per line of code. If you cost more than a machine, you get laid off.

  • lugal@sopuli.xyz

    This is all funny and stuff, but ChatGPT knows how long the German-Italian border is, and I’m sure most of you don’t.

    • SquirtleHermit@lemmy.world

      Apparently I have too much free time and wanted to check, so I asked ChatGPT exactly how long the border was. It could only give an approximate guess, and it had to search using Bing to confirm.

      • Blackmist@feddit.uk

        Google’s AI gives it as:

        The length of the German-Italian border depends on how you define the border. Here are two ways to consider it:

        Total land border: This includes the main border between the two countries, as well as the borders of enclaves and exclaves. This length is approximately 811 kilometers (504 miles).

        Land border excluding exclaves and enclaves: This only considers the main border between the two countries, neglecting the complicated enclaves and exclaves within each country’s territory. This length is approximately 756 kilometers (470 miles).

        It’s important to note that the presence of exclaves and enclaves creates some interesting situations where the border crosses back and forth within the same territory. Therefore, the definition of “border” can influence the total length reported.

      • lugal@sopuli.xyz

        That’s a number I never got. I got either 700-something km or 1000-something. Only sometimes does ChatGPT realize that Austria and Switzerland are in between and there is no direct border.

    • xmunk@sh.itjust.works

      Nobody knows how long any border is if it adheres to any natural boundaries. The only borders we know precisely are post-colonial perfectly straight ones.

          • lugal@sopuli.xyz

            I’ve tried, but ChatGPT won’t give me an answer. So far, my personal record is Serbia - Iraq. If you find two countries that are further apart, yet ChatGPT will give you a length for the border, feel free to share a screenshot!