You can also just post the 4–5 data items without labeling the source as low or high in credibility or bias, and let people decide for themselves. Like this, maybe:
“Based on source X, this source’s media bias is:
Methodology of X is at: ”
I find this quite common (and confusing) for certain news types like policy, e.g. “party A reverses the disapproval to oppose the once-unacceptable ban”
I mean, this article is from 2022, and it claims to use seaborn but doesn’t really. It really shows their effort, even before the whole AI hype …
https://www.geeksforgeeks.org/how-to-create-a-stacked-bar-plot-in-seaborn/
“cuts parts”
that’s actually what the underlying method does, since this is extractive summarization: it mostly cuts and stitches things together.
From my naive understanding, this type of method does not use or “understand” context.
The alternative is abstractive summarization, which is what LLMs (or even small/medium language models) are good at. But I suspect that would be a controversial choice on Lemmy.
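To make the “cuts and stitches” point concrete, here’s a toy sketch of frequency-based extractive summarization. This is only an illustration of the general idea, not the TLDR bot’s actual method; the function name and scoring scheme are my own.

```python
# Toy extractive summarizer (hypothetical, for illustration only):
# score each sentence by document-wide word frequency, keep the top
# scorers, and stitch them back together in their original order.
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    if len(sentences) <= num_sentences:
        return text.strip()

    # Word frequencies over the whole document drive the sentence scores.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    # Keep the highest-scoring sentences, restored to document order,
    # so the cut-and-stitch result still reads in sequence.
    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    return " ".join(s for s in sentences if s in top)
```

Note that nothing here models context or meaning — the score is just word counts, which is why extractive output can feel choppy compared to an abstractive rewrite.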
I’m also curious. A quick search came up with these. Not sure which one is most reliable/updated
Many things are called “AI models” nowadays (unfortunately due to the hype). I wouldn’t dismiss the tools and methodology yet.
That said, the article (or the researchers) did a disservice to the analysis by not including a link to the report (and code) that outlines the methodology and how the distribution of similarities look. I couldn’t find a link in the article and a quick search didn’t turn up anything.
you should try asking the same question using xAI / Grok if possible. You could also ask ChatGPT about Altman.
welp, guess you’re right. It’s not common, but it’s not just a few people either.
tell me more about the “almost” part …
I think porn generation (image, audio, and video) will eventually be very realistic and very easy to make with only a few clicks and some well-crafted prompts. Things would be on a whole other level compared to what Photoshop used to be.
re: your last point, AFAIK, the TLDR bot is also not AI or LLM; it uses more classical NLP methods for summarization.
If you suspect that it’s been modified, try going to places like the Internet Archive or archive.today to check. The claims you’ve made seem big, so back them up with sources.
Is there a database tracking companies that start out with good intentions and eventually get bought out or sell out their initial values? I’m wondering what the deciding factors are, and how long it takes them to turn.
Daredevil (the design and music is sick) and The Morning Show (the animation is very captivating to me)
re 1: out of curiosity, do you encounter dnsleaks when using wireguard?
re 4: you can also check out https://starship.rs/, which makes configuring your shell prompt very intuitive via a TOML file.
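For a sense of what that looks like, here’s a small hypothetical `~/.config/starship.toml` — just a sketch using a few common options, not a recommended setup:

```toml
# Hypothetical minimal starship config (illustrative values).
add_newline = false            # don't insert a blank line between prompts

[character]
success_symbol = "[➜](bold green)"   # prompt symbol after a successful command

[git_branch]
symbol = "🌱 "                 # shown before the current branch name
```

Each `[section]` configures one prompt module; anything you don’t set falls back to starship’s defaults.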
Hold up, are you sure you can’t view Discussions or Wikis? Which repos can you not view them on?
I’m fine viewing them for public repos that I usually visit.
Asking to make sure GitHub isn’t slowly rolling out this lockdown.
If you’re not already aware, take a look at tree pod burials; availability depends on your country/state.
for example, look at https://www.greenmatters.com/p/tree-pod-burials
this article from the same site lists availability across US states: https://www.greenmatters.com/sustainable-living/what-states-allow-green-burials
sounds like a missing episode of iZombie.
Reminds me of this article https://www.alexmurrell.co.uk/articles/the-age-of-average where the author pulls together different examples of designs and aesthetics converging toward some “average”.
I feel conflicted about these trends: on one hand, things seem to be becoming more accessible; on the other, it feels like a loss.
This may be especially relevant with generative AI: at least for the little generative art I look at, at some point the pieces start to feel the same, impersonal.
I use GitLab CI mainly and dabble in GitHub Actions. Can you clarify what you mean by “Not even Github managed to pull that off”? IIRC, Actions is quite featureful, and the runner is open source, so I assume it can be used with self-hosted runners as well.
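For reference, pointing a job at a self-hosted runner is a one-line change in the workflow file. A hypothetical minimal example (the `make test` step is just a placeholder):

```yaml
# Hypothetical minimal workflow: "self-hosted" routes the job to a
# runner you register yourself instead of a GitHub-hosted VM.
name: ci
on: [push]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: make test   # placeholder build/test command
```

You register the runner on your own machine from the repo’s (or org’s) settings, and jobs with that `runs-on` label get dispatched to it.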