• 1 Post
  • 17 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • All those folks in the 50+ age group that grew up with “Russia is enemy #1” are probably cycling through waves of intense work and prolonged orgasm.

    I wouldn’t be surprised if one of the first things considered in strategizing any armed conflict is whether they want Russia and China to know that we have X or are capable of Y. Russia has shown their hand. If they could do more, they would have by now.

    It has also taught NATO that Russia is still in the barbaric-tactics mindset. Hospitals, schools, churches, shipping centers - they’re all valid targets. If Russia wants a position, they’ll level the entire town. That certainly changes the plans, if anyone thought they would abide by the Geneva Conventions.


  • Oh, I’ve just been toying around with Stable Diffusion and some general ML tidbits. I was just thinking from a practical point of view. From what I read, it sounds like the files are smaller at the same quality, require the same or less processor load (maybe), are tuned for parallel I/O, can be encoded and decoded faster (with less difference in performance between the two), and support progressive loading. I’m kinda waiting for the catch, but haven’t seen any major downsides, besides less optimal performance for very low resolution images.

    I don’t know how they ingest the image data, but I would assume they’d be constantly building sets, rather than keeping lots of subsets, if just for the space savings of de-duplication.

    (I kinda ramble below, but you’ll get the idea.)

    Mixing and matching the speed/efficiency and storage improvement could mean a whole bunch of improvements. I/O is always an annoyance in any large set analysis. With JPEG XL, there’s less storage needed (duh), more images in RAM at once, faster transfer to and from disk, fewer cycles wasted on waiting for I/O in general, the ability to store more intermediate datasets and more descriptive models, easier archiving of the raw photo sets (which might be a big deal with all the legal issues popping up), etc. You want to cram a lot of data into memory, since the GPU will be performing lots of operations in parallel. Accessing the I/O bus must be one of the larger time sinks, and CPU load becomes a concern just for moving data around.

    I also wonder if the support for progressive loading might be useful for more efficient, low resolution variants of high resolution models. Just store one set of high res images and load them in progressive steps to make smaller data sets. Like, say you have a bunch of 8k images, but you only want to make a website banner based on the model from those 8k res images. I wonder if it’s possible to use the progressive loading support to halt reading in the images at 1k. Lower resolution = less model data = smaller datasets to store or transfer. Basically skipping the downsampling.
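    The halt-early idea is mostly arithmetic. A back-of-the-envelope sketch, assuming each progressive step halves both dimensions (a simplification - real codecs don’t have to step that way), with illustrative resolutions:

    ```python
    # Rough savings from halting a progressive decode early instead of
    # storing separate downsampled copies. Resolutions are illustrative.

    def pixels(width: int, height: int) -> int:
        return width * height

    master = (7680, 4320)  # one stored "8k" original
    # Each step halves both dimensions, so pixel data shrinks 4x per step.
    for step in range(4):
        w, h = master[0] >> step, master[1] >> step
        print(f"{w}x{h}: 1/{pixels(*master) // pixels(w, h)} of the full pixel data")
    ```

    Three steps down gets you from 8k to roughly 1k while reading ~1/64 of the pixel data, which is where the “skip the downsampling” win would come from.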

    Any time I see a big feature jump, like better file size, I assume the trade off in another feature negates at least half the benefit. It’s pretty rare, from what I’ve seen, to have improvements on all fronts.




  • I’m sure it’s a fine service, if you want to use it regularly, but I just wanted 1 tiny thing. If they had a $1-per-obit or per-page deal, sure. Instead, there’s this whole microcosm of bullshit where some are archived, others available, some omitted from public collections, some on different 3rd party sites, etc.

    The family paid for an obit. It wasn’t in the 1800s. The paper has been digitized. I should be able to go to the paper with the name, exact date, and city and find it. They literally say it doesn’t exist. Not that it’s on our archive site or our partner site, just nothing.

    I would have thrown a couple bucks to any of the sites for access, but no, I need to sign up for a subscription, give them all my details, get spam calls for the next 100 years, just no. Super frustrating.



  • If the surface is flat enough, start with a scraper. One of those single edged replaceable blade deals. That’ll quickly take off 90%, if not all of it. If the blade gets sticky and you want to make another pass, wipe it off with some oil (basically any, even olive oil). Then go for the other suggestions, like rubbing the sticker with oil and alcohol.

    This works much quicker than oil and alcohol alone, BUT the surface has to be flat. If it isn’t flat, it has to be a material a razor blade won’t damage.



  • I don’t WANT to agree, but I kinda do.

    We’re here because Reddit was shit on top of shit, led by a gaping anus. We all accept that Meta is the same.
    We didn’t want Reddit profiting from our work. Meta will do the same, only more competently.

    Defederation is useless at scale. They can continually spin up new instances that act as spies and bridges to Meta’s area.
    Once enough Meta bridge nodes are woven into the Fedi, they’ll use a backchannel to mask the exchange/activity.

    Someone plz tell me I’m wrong, but this is how I think things work in the background…

    • Bob creates a Lemmy node - @Zucc1.ughfuckoff. It has 3 users and basically shops around until someone in lemmy.world’s sphere allows federation. Zucc1 looks like any random, small instance.
    • Once federated, Zucc1 syncs to its connected Lemmy instances - for now there is no Meta connection.
    • Zucc1 can then federate with a bunch of other instances, including Zucc2.
    • This repeats for a few weeks, infiltrating Fedi. This could be happening now.
    • A new set of Lemmy nodes spin up and federate only with a portion of the spy instances. The spy instances don’t respect the federation rules, distributing portions of the Fedi sync back to the Meta connected nodes, masking the source and destination.
    • Once signed posts are received by the spy nodes, user names are swapped with a table synced by spy and bridge instances. @User1@T4server.threads becomes @User7@Zucc4.ughfuckoff.
      • The Threads user sees their message from @someone@lemmy.world (which can also be swapped if they worry Threads users care about any of this stuff).
      • The Lemmy user sees the message from @User@Zucc4.ughfuckoff.
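    The name-swap step in those last bullets amounts to a shared lookup table. A minimal sketch, with every handle and name hypothetical:

    ```python
    # Hypothetical identity-swap table shared by the spy/bridge instances:
    # rewrite the author's handle in each direction before relaying a post.
    swap = {"@User1@T4server.threads": "@User7@Zucc4.ughfuckoff"}
    unswap = {v: k for k, v in swap.items()}

    def rewrite(handle: str, toward_fedi: bool) -> str:
        table = swap if toward_fedi else unswap
        return table.get(handle, handle)  # unknown handles pass through untouched

    print(rewrite("@User1@T4server.threads", toward_fedi=True))
    # -> @User7@Zucc4.ughfuckoff
    ```

    One wrinkle: federated activities are signed by the origin server, so rewriting the author would break signature checks on instances that verify them - a possible detection hook.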

    Probably easy to combat when it’s one instance here and there. If it’s constant and automated, federating would have to be paused until the spies are weeded out and there’s a better detection strategy. If they get a big enough network going, they could all dip out at once, change identity, and refederate back in as the Fedi network flips out because of all the sync mismatches. Just more new nodes joining in. They have the source code, so they can act differently from other instances as long as it doesn’t cause problems.

    Is this a realistic scenario or am I way off base? I feel like it has to be one of the two.



  • My point is that this argument makes as much sense as what I wrote, so it’s encouraging that you think it’s ridiculous.

    “Versus” is a valueless delineation separating two subjects. There are two groups: The people of the Fediverse and the people not in the Fediverse. Neither one is good or bad, and in fact, many are a part of BOTH. That self awareness cancels any perceived negativity. We’re all probably some level of “normie,” and I’ve never heard someone use that word without immediate laughter by all parties. Sure, maybe in the early 00s by grade school punks, but I don’t think anyone does or should care.

    The point you’re actually making, without articulating it well, is the lack of terminology for federated groups. No one wants to say, “I’m a member of a select set of federated Lemmy and Kbin instances within the larger Fediverse.” You want an affirmative set of terms, so that delineation can be made; you want to say, “The X have this, and the not(X) have that.” From there you can get to value judgements, based on the expression of X, and I’ll recognize your concerns. The ridiculousness of those terms not existing makes it VERY hard to claim intentional negativity/harm, because it simultaneously draws attention to the fact that group X doesn’t have its shit together enough to come up with a nickname or shorthand.

    “You’re better than us? What are you?”
    “Well, you see, I’m a part of a federated network of…”
    (Looks up - everyone left)

    So, until someone comes up with some non-super-cringe terms for this wonderful mess, the discussion is a waste of everyone’s time. And until then, I suggest taking it on a case by case basis. If someone is offended, tell them that’s not intended because we don’t have OUR shit together, ask them what they prefer, and use that term around them.


  • I 100% agree that word is cringe and I’m totally into the fediverse for the long haul, but we have to address the pachyderm in the room: The word “Fediverse” is just as cringe.

    I, … I’m sorry. I can read it in a document, but the second a human being types it, I can’t take it seriously. I don’t care if folks want to shorten it to something like the FI (Federated Instances). Yes, there are other uses of the word “federate”, but it immediately sounds like a federal intraweb domain or a group of Star Trek policy makers.

    “Fediverse” is “netizen 2.0.”
    “Fediverse” is “cruising on the information superhighway Pro.”
    Please tell me I’m not alone in thinking this.


  • No no no, it’s stereotyping and prejudice when OTHER people do it to US. WE should tell THEM that THEY are US, and by saying this to OURSELVES we have said it to THEM, so that WE know that THEY know, but now THEY are a THEM again.

    YOU don’t get it. WE get it. YOU should all be like US where there is no YOU and US, there is only the WE that is YOU and US, but there is no YOU and US, there is only the WE that is YOU and US, but there is no YOU and US, there is only the WE that is YOU and US, but there is no YOU and US, there is only the WE that is YOU and US.

    Simple. See? You don’t? But, YOU must because there is no…


  • So you’re saying there are people who DO use “normies” and people that DON’T use “normies”. These are not two groups of people. Shit, I just joined this thread, so that makes ME one of YOU, and there’s OTHERS that aren’t here. Are WE the elitists? Or are THEY the “normies”? YOU said there’s no US or THEM, so EVERYONE is talking in this thread. ANYONE not in this thread must not exist because I know I exist, so YOU thread posters must exist, but wait, that makes ME an US and YOU a THEM.

    (I’m not trying to be snarky, but this argument is exactly as nonsensical.)


  • I’ve tried to warn people about them. I got a 10 pack early on while learning and it almost made me give up the hobby. Classic n00b mistakes? Some, but after I set that filament aside in a drybox, I had almost no problems. The only mistakes I made with those other brands were due to strategies I developed to rescue prints from IIIDMax’s garbage. I must have used 10-20 other brands over the next year, revisiting the cursed spools occasionally.

    I thought I could relegate the leftovers to my 3D pen. Somehow that satan-spawned plastic jammed it up. The pen is basically a soldering iron, a motor, and 2 gears. I’ve fed hand-cut strips of PETG bottles through it. IIIDMax’s filament wasn’t precise enough for my zero-precision 3D pen.


  • Yup, I could see Reddit noticing a large number of comment edits and bumping that timeout higher.

    Also, many users probably don’t know what forks are and might just run the first version of PDS they see, see that it isn’t “working”, and give up or wait for an update that never comes. The userscript is easier to understand because it visually performs the same actions a person would take to edit comments. And it’s more fun to watch than PDS’s progress bar. :)


  • There are a couple ways to handle this.

    1. This happens with PowerDeleteSuite’s main version, but one of the forks adds a 5 sec pause between deletes, and that one does work. It’s very slow (~1000 comments edited in 2 hours), but reliable.
      https://github.com/deestan/PowerDeleteSuite

    2. This userscript overwrites comments manually. You go to your comments page, scroll until there are no more comments to load, and run it. It’ll go 1 by 1 - edit, enter new text, save, 3 sec pause. By default the replacement text is a link to the script, but you can edit that to be whatever you want (and you should, to make it harder for Reddit to batch restore).
      https://greasyfork.org/en/scripts/468337-so-long-reddit-thanks-for-all-the-fish

    Note 1: Method #2 doesn’t use the API, so it should still work on July 1. PDS does use the API, so it might not.
    Note 2: Both methods let you continue using the browser in another tab or window (at least in Firefox). Some browsers and extensions will suspend or unload tabs that haven’t been viewed in a while, so the best policy is to do this in a separate window.
    Note 3: The default PowerDeleteSuite should be fine for exclusively deleting comments if you run it several times. Just keep running it until nothing shows up under comments. To be sure, once your comments are all gone, wait an hour and check again. If anything remains, use method #2.
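    Both tools boil down to the same throttled loop: act on one comment, sleep, repeat, so the rate limiter never trips. A language-agnostic sketch (the edit callback is a stand-in, not a real Reddit client):

    ```python
    import time

    def paced_overwrite(comments, edit, pause=5.0, sleep=time.sleep):
        """Edit one comment at a time, sleeping between actions so the
        server-side rate limiter isn't tripped (the 5-sec-pause approach)."""
        for comment in comments:
            edit(comment, "so long, and thanks for all the fish")
            sleep(pause)

    # Quick dry run with stubbed-out edit/sleep:
    edited = []
    paced_overwrite(["c1", "c2", "c3"],
                    edit=lambda c, text: edited.append(c),
                    sleep=lambda s: None)
    print(len(edited))  # -> 3
    ```

    Slower pauses just trade wall-clock time for reliability, which is why the 5 sec fork takes ~2 hours per 1000 comments.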

    (Overwriting account #7 right now! Using PDS with 5 sec pause has worked flawlessly.)

    Pass this along to anyone having trouble!


  • I’m generally a Windows user, but on the verge of doing a trial run of Fedora Silverblue (just need to find the time). It sounds like a great solution to my… complicated… history with Linux.

    I’ve installed Linux dozens of times going back to the 90s (LinuxPPC anyone? Yellow Dog?), and I keep going back to Windows because I tweak everything until it breaks. Then I have no idea how I got to that point, but no time to troubleshoot. Easily being able to get back to a stable system that isn’t a fresh install sounds great.