• 5 Posts
  • 99 Comments
Joined 1 year ago
Cake day: July 20th, 2023




  • mryessir@lemmy.sdf.org to Star Wars Memes@lemmy.world · "I'm getting old" · 2 up / 1 down · 7 days ago

    I really couldn't enjoy them. It went against my grain, and I could tell every upcoming emotion in advance. It reminded me of those choppily cut YouTube videos.

    Some scenes were nostalgic, but I did indeed feel robbed of all the potential stories missed and overwritten.

    Since my friends had a good time, I just focused on those few nostalgic moments, which were nice to see after such a long time. You gifted me the opportunity to reflect, which I appreciate.

    The prequels are god awful movies and you seem to have no issue with them.

    Hehe, you read me like a book. I even liked Episode I very much.


  • mryessir@lemmy.sdf.org to Star Wars Memes@lemmy.world · "I'm getting old" · 15 up / 1 down · 7 days ago

    When Episode II was released I had already read at least 10 books from the Star Wars universe - set chronologically before Episode I and after Episode VI.

    The authors of these books exchanged concepts, aligned the universe across their works and put care into consistency between different reads. They probably even questioned George Lucas about the possible future.

    And then came Disney, who dumped years of work and didn't bother to align anything. This is why they suck hardcore to me. These films are dumb, just money-grabbing machines.

    Fuck everything since disney. They simply suck hardcore.



  • Some elaboration of mine on why I made this post:

    Once I helped organize a huge event, attending negotiations between a monopoly-like company and the purchasing departments.

    Attendees were required to keep their distance from certain competitors, and some even ruled out participation under certain circumstances.

    I want to give the benefit of the doubt, but there have to be more similarities between these sponsors than meet the common eye. So I posted this.





  • If you run qemu from the CLI you get a window which grabs keyboard and mouse automatically. Ctrl+Alt+G (from the top of my head) releases the input devices so you can navigate the host again. The window is otherwise a default window for your display server.

    I find qemu from the CLI way more transparent than these GUI applications, since each VM is a single, readable script. So I recommend this.

    Regarding installation on iMac bare metal: if the kernel supports virtualization, you can expect it to work flawlessly. If you have a dedicated graphics card, you can only pass it through (as well as dedicated devices like HDDs) if your mainboard supports IOMMU.

    If it does, all you need is the qemu man page to set up your VM.

    Why I prefer a qemu script to any GUI alternative:

    The entire script for passing RAM, a GPU and an HDD is about 10 lines max. A default VM with TCG emulation, e.g. via libvirt, can easily run past 50 lines of XML.
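
    As a sketch of such a run script (the PCI address, device paths and sizes are placeholder assumptions, not from the original post), it can look like this:

```shell
#!/bin/sh
# Sketch of a single-file qemu run script. The PCI address 01:00.0 and
# /dev/sdb are invented placeholders; the vfio-pci passthrough line only
# works when the mainboard supports IOMMU.
QEMU=qemu-system-x86_64
OPTS="-enable-kvm -cpu host -smp 4 -m 8G
  -device vfio-pci,host=01:00.0
  -drive file=/dev/sdb,format=raw,if=virtio"
# Print the full command for inspection; swap echo for exec to boot the guest.
echo $QEMU $OPTS
```

    Keeping the whole VM definition in one short, inspectable command line is exactly the transparency argument above.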

    I recommend giving it a try. My workflow: place the install script in some directory; the default run script goes into my ~/.bin/. You can combine these scripts, but I find it way simpler to separate them (combining would need more elaborate options for mounting devices).
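
    The matching one-off install script could hypothetically look like this (installer.iso, disk.qcow2 and the disk size are invented names for illustration):

```shell
#!/bin/sh
# Sketch of the one-off install script. Run it once to install the guest,
# then use the plain run script (without -cdrom/-boot) from ~/.bin/.
# Create the backing image once beforehand, e.g.:
#   qemu-img create -f qcow2 disk.qcow2 40G
QEMU=qemu-system-x86_64
OPTS="-enable-kvm -cpu host -smp 4 -m 8G
  -cdrom installer.iso -boot d
  -drive file=disk.qcow2,format=qcow2,if=virtio"
# Print the full command for inspection; swap echo for exec to run it.
echo $QEMU $OPTS
```

    The only difference to the run script is the attached installer ISO and the boot order, which is why keeping them as two separate tiny files stays simple.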









  • The EU will already have projects in development as far as my experience goes.

    What I do not know for sure, but think applies: such an act is legally binding for all member states. If member states want to fight it, they can bring a case before the EU court to have it adjusted so it aligns with national law. This can postpone the national implementation for a few years.

    But it can only be revoked by a new act of the EU council.

    And they can simply ignore any new suggestion from the EU Parliament if they like.



  • Does the Debian community not already maintain a Chromium fork? How much does that cost?

    I honestly can’t and wouldn’t judge: Time, Resources, implicit know-how etc. are unknown to me.

    The human time needed should grow with the number of patches that need to be applied to the upstream code base, …

    Yep.

    … because some will fail now and then.

    Forks happen for different reasons, so it depends on why you fork. It could be that one feature diverges so much that applying patches isn't enough - especially patches in the Debian sense, rather than plain .diff/.patch patches.

    This is what I refer to as the "fatness" of the fork: the more patches, the fatter. It should be possible to build, package and publish a fork with zero patches without human intervention, after the initial automation work.

    For a brief period, that is - until something in the build system rattles loose. Debian patches are often applied to remove binary blobs due to licensing. Imagine upstream chooses to include M$ Recall in the render engine: you would need to put in an extraordinary amount of work, maybe even maintain a completely separate implementation. That would also imply changes to the build system, which now needs to be aligned continuously between both upstreams.

    Maybe I’m missing something obvious. 😅

    With each version you have to review every commit very carefully if you want to maintain compatibility with upstream, in order to merge patches into your fork.

    When there are 50 devs working on upstream and you need to review every commit to assure requirement X, that alone is a hard path. If you also need to apply workarounds compatible with future versions of upstream, you need PROFESSIONALS. Luckily these are found in the FOSS community; but they are underpaid and, worse, underappreciated.
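
    The recurring carry-your-patches cycle described here can be sketched as a toy git session (repository, branch and tag names are all invented); rebasing the fork's patch branch onto each new upstream release is exactly where the review and conflict work lands:

```shell
#!/bin/sh
# Toy model of fork maintenance: "upstream" releases v1 and v2, the fork
# carries one local patch on a branch, and every release means a rebase.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main repo
cd repo
git config user.email fork@example.org
git config user.name fork
echo base > engine.c
git add engine.c
git commit -qm 'upstream v1'
git tag v1
git checkout -qb fork-patches          # the fork's patch queue
echo 'no binary blobs' >> engine.c
git commit -qam 'fork: strip blob'
git checkout -q main                   # meanwhile, upstream moves on
echo extra > feature.c
git add feature.c
git commit -qm 'upstream v2'
git tag v2
git checkout -q fork-patches
git rebase -q v2                       # the recurring human-time step
git log --format=%s -n 1
```

    In this toy case the rebase applies cleanly; in a real browser fork with dozens of patches and 50 upstream devs, this step is where patches "fail now and then" and conflicts have to be resolved by hand.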

    // Plus, I could imagine that things like Chrome may not even ship with the full test suite. The test suite of a browser is surely so huge I can't even comprehend the effort put into it. And then bug tickets… Upstream says: not in my version. Now the fork has to address these themselves! :)