  • Imagine you were asked to start speaking a new language, e.g. Chinese. Your brain happens to work quite differently from everyone else’s: you have immense capabilities for memorization and computation, but not much else. You can’t really learn Chinese with this kind of mind, but you have an idea that plays right into your strengths. You will listen to millions of conversations between real Chinese speakers and mimic their patterns. You make notes like “when one person says A, the most common response is B” or “most often after someone says X, they follow it up with Y”. Then you go into conversations with Chinese speakers and just perform these patterns. It’s all just sounds to you. You don’t recognize words and you can’t even tell from context what’s happening. If you do that well enough, you are technically speaking Chinese, but you will never have any intent or understanding behind what you say. That’s basically LLMs.


  • Just because something is available to view online does not mean you can do anything you want with it. Most content is automatically protected by copyright. You can use it in ways that would otherwise be illegal only if you are explicitly granted permission to do so.

    Specifically, Stack Overflow licenses any content you contribute under CC BY-SA 4.0 (older content is covered by other licenses that I omit for simplicity). If you read the license you will note two requirements: attribution and “share-alike”. So if you take someone’s answer, including the code snippets, and include it in something you make, even if you modify it somewhat, you have to attribute it to the original source and you have to share it under the same license. You could theoretically mirror the entire SO site’s content, as long as you used the same licenses for all of it.
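
    As an illustration, a minimal attribution header for a snippet copied into your own code could look something like this (the URLs and names are placeholders, not a vetted legal template):

    # Adapted from a Stack Overflow answer: https://stackoverflow.com/a/<answer-id>
    # Author: <username>, https://stackoverflow.com/users/<user-id>
    # License: CC BY-SA 4.0, https://creativecommons.org/licenses/by-sa/4.0/
    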

    So far AI companies have simply scraped everything and argued that they don’t have to respect the original license. They argue that it is “fair use” because AI is “transformative use”. If you look at the historical usage of “transformative use” in copyright cases, their case is kind of bullshit actually. But regardless of whether it will hold up in court (and whether it should), the reality is that AI companies are going to use everybody’s content in ways for which they have not been given permission.

    So for now it doesn’t matter whether our content is centralized or federated. It doesn’t matter whether SO has a deal with OpenAI or not. SO content was almost certainly already used for ChatGPT. If you split it into hundreds of small sites on the fediverse, it would still be part of ChatGPT. As long as it’s easy to access, they will use it. Allegedly they also use torrents for input data, so even content that is not publicly viewable is not safe. If/when AI data sourcing is regulated, the “transformative use” argument fails in court, and the fines are big enough for the regulation to actually work, then sure, the situation described in the OP will matter. But we’ll have to see if that ever happens. I’m not holding my breath, honestly.





  • Humans are not generally allowed to do what AI is doing! You talk about copying someone else’s “style” because you know that “style” is not protected by copyright, but that is a false equivalence. An AI is not copying “style”, but rather every discernible pattern of its input. It is just as likely to copy Walt Disney’s drawing style as it is to copy the design of Mickey Mouse. We’ve seen countless examples of AIs copying characters, verbatim passages of text, and snippets of code.

    Imagine if a person copied Mickey Mouse’s character design and got sued for copyright infringement. Then they go to court and their defense is that they downloaded copies of the original works without permission and studied them for the sole purpose of imitating them. They would be admitting that every perceived similarity is intentional. Do you think they would not be found guilty of copyright infringement?

    And AI is this example taken to the extreme. It’s not just creating something similar; it is by design trying to maximize the similarity of its output to its training data. It is being the least creative that is mathematically possible. The AI’s only trick is that it threw so much stuff into its mixer of training data that you generally can’t trace the output back to a specific input. But the math is clear. And while it’s obvious that no sane person will use a copy of Mickey Mouse just because an AI produced it, the same cannot be said for characters of lesser-known works, passages from obscure books, and code snippets from small free software projects.

    In addition to the above, we allow humans to engage in potentially harmful behavior for various reasons that do not apply to AIs.

    • “Innocent until proven guilty” is fundamental to our justice systems, but the same does not apply to inanimate objects. E.g. a firearm is restricted because of the danger it poses even if it has never been used to shoot anyone, while a person is only liable for the damage they have caused, never for their potential to cause it.
    • We care about people’s well-being. We would not ban people from enjoying art just because they might copy it, because that would be sacrificing too much. However, no harm is done to an AI when it is prevented from being trained, because an AI is not a person with feelings.
    • Human behavior is complex and hard to control. A person might unintentionally copy protected elements of works that influenced them, and in most cases that is hard to tell. An AI, by contrast, has the sole purpose of copying patterns and nothing else.

    For all of the above reasons, we choose to err on the side of caution when restricting human behavior, but we have no reason to do the same for AIs, or anything inanimate.

    In summary, we do not allow humans to do what AIs are doing now and even if we did, that would not be a good argument against AI regulation.



  • I have my own backup of the git repo and I downloaded this to compare and make sure it’s not some modified (potentially malicious) copy. The most recent commit on my copy of master was dc94882c9062ab88d3d5de35dcb8731111baaea2 (4 commits behind OP’s copy). I can verify:

    • that the history up to that commit is identical in both copies
    • that after that commit, OP’s copy only has changes to translation files, which are functionally insignificant

    So this does look to be a legitimate copy of the source code as it appeared on GitHub!
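
    For anyone who wants to run the same check against their own clone, it boils down to something like the following sketch (the paths are hypothetical):

    # ~/backup is my own clone, ~/op-copy is the downloaded copy (illustrative paths).
    # A commit hash covers the whole tree plus all ancestor commits, so if both
    # repos contain this object, their histories up to that commit are identical:
    git -C ~/backup cat-file -e dc94882c9062ab88d3d5de35dcb8731111baaea2 && echo present
    git -C ~/op-copy cat-file -e dc94882c9062ab88d3d5de35dcb8731111baaea2 && echo present
    # List what OP's copy adds on top of that commit (here: only translation files):
    git -C ~/op-copy diff --stat dc94882c9062ab88d3d5de35dcb8731111baaea2 master
    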

    Clarifications:

    • This was just a random check; I do not have any reason to be suspicious of OP personally
    • I did not check branches other than master (yet?)
    • I did not (and cannot) check the validity of anything beyond the git repo
    • You don’t have a reason to trust me more than you trust OP… It would be nice if more people independently checked and verified against their own copies.

    I will be seeding this for the foreseeable future.





  • lsblk just lacks a lot of information and creates a false impression of what is happening. I did a bind mount to try it out.

    sudo mount -o ro --bind /var/log /mnt
    

    This mounts /var/log to /mnt without making any other changes. My root partition is still mounted at / and fully functional. However, all that lsblk shows under MOUNTPOINTS is /mnt. There is no indication that it’s just /var/log that is mounted and not the entire root partition. There is also no mention at all of /. findmnt shows this correctly. Omitting all irrelevant info, I get:

    TARGET                                                SOURCE                 [...]
    /                                                     /dev/dm-0              [...]
    [...]
    └─/mnt                                                /dev/dm-0[/var/log]    [...]
    

    Here you can see that the same device is used for both mountpoints and that it’s just /var/log that is mounted at /mnt.

    Snap is probably doing something similar: it mounts a specific directory into the directory of the firefox snap. It is not using your entire root partition and it is not doing anything that would break the / mountpoint. This by itself should cause no issues at all. You can see in the issue you linked as well that the fix to their boot problem was something completely unrelated.
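
    If you want to see this on your own system, findmnt can list every mount backed by the same device (the device name below is from my system; yours will differ):

    # Bind mounts of a subdirectory show up as /dev/dm-0[/some/subdir],
    # so it's easy to tell that only a subtree is mounted, not the whole partition:
    findmnt --source /dev/dm-0
    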


  • Essentially ULWGL will allow you to run your non-steam games using Proton, Proton-GE, or other Proton forks using the same pressure vessel containerization and runtime that Valve use to run games with Proton

    This is the crucial piece of information. In less technical terms: Proton is designed to run in a very specific environment and it might be incompatible with your system. Steam runs Proton inside a bubble so that it interacts less with your system and the incompatibilities don’t become a problem. ULWGL aims to create the same bubble, so it’s the correct way to run Proton outside Steam.



  • Grub can load booster images; the issue is about incorrect grub.cfg generation.

    What they’re saying in the issue is that grub-mkconfig will not create a correct “Arch Linux” menu entry for booster, but if you go to “Advanced options” and choose the “booster” menu entry it works. I can confirm this. It happened on the system I’m currently using.

    Specifically, the problem is that grub-mkconfig does not add the booster image to the initrd line of the default menu entry. You can add it manually. For example, I had to change this

    initrd  /intel-ucode.img
    

    to this

    initrd  /intel-ucode.img /booster-linux-zen.img
    
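
    For orientation, that line lives inside the default menuentry in /boot/grub/grub.cfg, roughly like this (a trimmed, illustrative sketch; the labels, kernel and image names will differ per system):

    menuentry 'Arch Linux' --class arch --class gnu-linux --class os {
        ...
        linux   /vmlinuz-linux-zen root=UUID=... rw
        initrd  /intel-ucode.img /booster-linux-zen.img
    }
    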

    If I recall correctly this issue was not present last time I set up a system with booster. It might be a regression or maybe it only happens in specific system configurations.


  • I use booster and it’s cool. I don’t see any noticeable difference in boot times but the image generation is much faster. mkinitcpio would take several seconds while booster takes about one.

    The first time I tried it, it didn’t boot because of something missing in the generated image. I tried a universal booster image (set universal: true in /etc/booster.yaml) and it worked. Technically this builds a larger image than necessary, but it’s still only 34MB and takes about a second to build, so I never bothered to troubleshoot what was missing. The universal image even handles LUKS-encrypted root partitions without additional booster configuration (you still have to configure the kernel parameters).
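
    For reference, the entire change is one line in the config. A sketch of the relevant part of my /etc/booster.yaml:

    # Build a universal image that includes all modules, instead of only
    # the ones probed for the current machine:
    universal: true
    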

    Another issue I noticed is that if you use grub-mkconfig and your only initramfs is booster, it will generate an incorrect main boot entry. It will still add booster as an option under “Advanced options”, so your system remains bootable if this happens to you. The quick fix is to manually add the initrd entry under the main menuentry in grub.cfg.