I’ve never tried Chemex, so I’m not sure 😅
Chemex seems better for making bigger batches of coffee.
For CPU, it seems like Meteor Lake has caught up with M2.
Better to watch YouTube from a Piped instance.
This is from Jonathan Gagne’s Instagram post.
SmartTube is more optimized for TV: https://github.com/yuliskov/smarttube
Give it a try.
Same here! I finished reading it in just a couple of days.
Tutanota is now Tuta
I just bought Legends & Lattes after seeing the nominees. I haven’t read fantasy books in a while, and it seems like it’s going to be a cozy read.
Aw shucks…
For now I just have to remember not to automatically click on IG links. Need to copy and clean them up first 😔
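If you want to automate that cleanup, here’s a minimal sketch (plain Python stdlib, hypothetical example URL) that just drops the query string, which is where the tracking bits usually live:

```python
# Hypothetical sketch: strip the query string and fragment from a link
# before sharing it, since IG tracking params live in the query string.
from urllib.parse import urlsplit, urlunsplit

def clean_link(url: str) -> str:
    """Keep scheme, host, and path; drop query and fragment."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# Example with a made-up tracking parameter:
print(clean_link("https://www.instagram.com/p/ABC123/?igsh=some_token"))
# -> https://www.instagram.com/p/ABC123/
```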
Thanks.
I wasn’t aware of this. Is there any way to turn it off? Thanks
I’m aware. So even though I have to use them, I try to limit tracking as much as I can.
I’ve done that. That’s the issue.
/u/Lucid5603@lemmy.dbzer0.com found the PDF:
This is awesome. Thank you 🙏
What about preserving languages that are close to extinct, but still have language data available? Can LLMs help in this case?
First time hearing about it. Link for others who are also wondering what Microsoft Pluton is:
While whether LLMs are intelligent or not is still hotly debated, I think the author’s thoughts are very interesting.
This is crazy to me. You can read in a stream of meaningless numbers (tokens) and incidentally build a reasonably accurate model of the real things those tokens represent.
The implications are vast. We may be able to translate between languages that have never had a “Rosetta Stone”. Any animals that have a true language could have it decoded. And while an LLM that’s gotten an 8-year-old’s understanding of balancing assorted items isn’t that useful, an LLM that’s got a baby whale’s grasp of whale language would be revolutionary.
The article is predicting that smartphones and movie cameras might adopt this.
Yeah, under heavier loads, Meteor Lake and M2 seem to be close, but under light loads, M2 is still much better.