Because you said that was not the point of the article and I asked you to clarify why you think it wasn’t. But never mind. This is going nowhere.
Why not? If the starting point of the article is that we can’t design interfaces based on our elitist 5 percenter knowledge then the remedy for that would be…?
I’m using a Le Potato for Home Assistant. It has worked very well for months now, but I’m a bit worried about long-term distro support.
I wonder why user tests aren’t even mentioned once in the article. If you design an interface, you have to test it with your audience.
Did you know you can edit your posts? That could be helpful for other readers, since you were incorrectly claiming in several messages that Wine needs root access.
There is a german saying “bewachte Milch kocht nie” (watched milk never boils)
There are some media reports claiming that there was simply another unidentified participant in the conference call and they didn’t notice. 🤷
I don’t agree that starting with cheap gear is always a good choice. If you already know what good coffee tastes like and you want to get into the coffee game, then buying a cheap (non-burr) grinder is just a waste of money.
Media corporations should not have a say in disconnecting users from the internet over copyright infringement. The right to social participation is part of a basic human right: self-determination. Today, the majority of interactions with society involve communication via the internet in one way or another, so access to the internet is vital for enabling social participation.
It’s a little bit worse than that in fact. “Programmiererinnen und Programmierer” or “Programmierer:innen” or “Programmierende”. And if you get it wrong you are not a grammar nazi but more of a regular nazi.
/s just in case
Technically it’s not the same: in the case of IMAP they would need to literally put spam mails into your account, as opposed to having visual elements in the UI that pretend to be an email. It might not feel like a big difference, but actively poisoning the user’s inbox is pretty bad.
Having a dedicated technical architect who hovers above the dev team and hands architectural decisions down is also not always seen as an ideal setup in software development.
If you tell your kid McD is something special, then whenever you pass a McD they will feel a craving and want it. This is how brand-obedient consumers are made. If instead you let them have McD for a week or two, they will see the food for what it is.
Fast food chains hate this simple trick
You forgot that he is also responsible for at least part of the 1 million Covid-related deaths in the USA. His unscientific bullshit had a huge impact on people who believed that Covid was nothing more than a common cold.
Ok, maybe it helps to be more specific. We have an LLM which is trained on a broad range of human data: news, internet chatter, stories, but also books of all kinds, including those about philosophy, diplomacy, altruism, etc. But if the topic at hand is “conflict resolution”, the overwhelming share of the data will be about violent solutions. It’s true that humans have developed means for peaceful conflict resolution. But at the same time they also have a natural tendency to focus on “bad news”, so there is much more data available about the shitty things that happen in the world, and that is what gets fed to the chatbot.
To fix this, you would have to train an LLM specifically to have a bias towards educational resources and a moral code based on established principles.
But current implementations (like ChatGPT) don’t work that way. Quite the opposite, in fact: In training, first we ingest all the data that we can get our hands on (including all the atrocities in the world) and then in a second step we fine-tune the LLM to make it “better”.
Don’t want to spoil your little circlejerk here, but that should not surprise anyone, considering chatbots are trained on vast amounts of human data. Humans have a rich history of violence, with only brief excursions into “collaborating for the good of mankind and the planet we live on”. So unless you build a chatbot that specifically focuses on those values, the result will inevitably be a mirror image of us human shitbags.
That looks like advice on how NOT to ask for technical support on a public forum.
What about an F-150? There’s plenty of those and last time I checked 150 is more than 15.
Probably AI generated
From the article: “It is unlikely the Department would ever pursue action against anyone using the Logan Act, given no one has been convicted of violating the 1799 law”
End of story.