From b10874c0a951af67dd8cf4f0de8f3b4cecd6af02 Mon Sep 17 00:00:00 2001
From: Robert Prehn <3952444+prehnRA@users.noreply.github.com>
Date: Sat, 9 Mar 2024 10:40:29 -0600
Subject: [PATCH] feat: Update link log

---
 site/link-log.yaml | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/site/link-log.yaml b/site/link-log.yaml
index 0e1f495..2bc57c8 100644
--- a/site/link-log.yaml
+++ b/site/link-log.yaml
@@ -1,3 +1,27 @@
+- url: https://darthmall.net/weblog/2023/rss/
+  name: "Evan Sheehan: RSS?"
+  summary: >
+    There's a swing back to RSS right now, which I think is good. But I also think that Evan's
+    thoughts here are well taken: RSS can't be the only solution for how we take in the web. It
+    can't be the only solution for how we decentralize the web again.
+
+    Hat tip to [Greg Morris](https://gregmorris.co.uk/2024/01/29/visit-more-blogs) for his related
+    post that helped me find this one.
+- url: https://ethanmarcotte.com/wrote/generative/
+  name: "Ethan Marcotte: Generative"
+  summary: >
+    A collection of quotes about AI from 1683 to 2024.
+- url: https://seldo.com/posts/ai-ml-llms-and-the-future-of-software
+  name: "Laurie Voss: On AI, ML, LLMs and the future of software"
+  summary: >
+    A level-headed explanation of what exactly AI, ML, and LLMs are. "LLMs are really complex Markov
+    chains; but the really complex part makes them qualitatively different" has been my go-to
+    explanation of LLMs. I disagree, however, with the idea that LLMs "understand" anything. LLMs
+    contain big statistical models of sentence and paragraph structure, and of the relationship
+    between words and phrases. This allows them to generate text that is a close statistical match
+    for text written by humans. That humans see this as "understanding" is a form of pareidolia.
+
+    This distinction is narrow, but important.
 - url: https://coryd.dev/posts/2024/towards-a-quieter-friendlier-web/
   name: "Cory Dransfeldt: Towards a quieter, friendlier web"
   summary: >