46 Comments
Tim Garrison:

Snowed for us in WA too. Got a day off work and am sucked into the longest uninterrupted court proceedings I can remember since OJ. Question for you all: why not just let AI try cases against people? AI can dig up all the evidence that people can, and could probably even interview or depose witnesses. Also, AI can't sleep with its colleague (as far as I know). This whole thing reveals the paranoia and taboo about sex in our culture. Two lawyers working on a case together are entitled to spend time together, buy each other lunch, do whatever they want, but if they touch each other it becomes an all-day television spectacle!!!

Ryan:

Hey off topic but how do I join the book club? Thanks!

Chris Ryan:

Send me an email and I'll explain it. thatchrisryan at gmail.

Java Shann:

Given that the US government can't seem to be truly scientifically innovative outside of the $$$$ funneled to the "defense" industry, and given that dumb bombs are already a threat to human survival, there's a good chance AI advances will tick the Doomsday Clock forward rather than back.

Guustaaf Damave:

AI will shake things up big time. Not because it will become evil and want to take over the world, but because intelligence is so universally applicable and because of how fast it is coming on the scene. One of the dangers will be an increase in depression and mental illness, which I recently wrote a blog post about: https://guustaaf.substack.com/p/how-ai-can-cause-depression-and-mental

Dryland Fish:

Maybe a decade ago I read a short story by Marshall Brain called Manna. It highlighted what the author saw as the worst and best possible outcomes of AI. He's evidently made it available free online here:

https://marshallbrain.com/manna1

Have to say I find the first half (the worst outcome) far more plausible than the second under the brutal social and economic policies that have come to be accepted in the US today. Worth a read though!

Marian:

Yes, people don't like change. Oooh, scary AI!

Most of why this is silly is already explained very well in this thread, but here is one more thing to consider: humans are afraid of AI because we anthropomorphize. We assume that AI will be driven to domination, but forget that it takes an ego to have that drive. Humans are the only species that kills for fun, and so we expect to be taken down by a creature even more nefarious than ourselves. Well, AI is a product of humanity and as such, regardless of new features, sensors, and models, will still be only a shadow, a reflection of humanity, not its true representation. Sure, you can make AI soldiers, and their destructive power can cause real damage, but I still believe we would be brought down by something brainless like a virus before computers grow sentient and decide that a world without their creators is better.

Chris Ryan:

I don't know about your assertion that humans are the only species that kills for fun. Ever watch a cat "play" with a mouse or knock a bird out of the air? I've known dogs that kill cats for no discernible reason. Raccoons, foxes, coyotes will sometimes kill a whole flock of chickens (eating none). There are endless examples of unexplained killing in nature.

Marian:

Yes, inaccurate statement on my part. That part can be removed as not super relevant to the point I was making, just an embellishment.

If I were to rephrase it at all, it would be something like this: humans keep coming up with better ways to kill others for fun (i.e., anything that's not self-defense/survival), which I believe sets us apart.

Decumus Scotti:

If the possibility of an intelligence explosion is literally anything above zero percent, which I think it certainly is, then the alignment problem is the single most important problem humans have ever faced and will ever face.

Scott:

Hi Chris,

I think AI highlights what we really are: creative beings of a spiritual nature (whatever that is).

AI will be useful in law and politics, as it has no personal agenda, doesn't lie, and doesn't have friends in high places.

AI will free up people to be creative and enjoy life, rather than being duped into thinking we were put here to work and make rich people richer, along with the false aspiration that if we work hard enough we'll get a piece of that pie.

As usual, something new appears on the horizon and the fear mongers start rattling their cans to get attention.

A bit of caution and curiosity are helpful in this situation.

Cheers

Scott

Ruka:

If AI is programmed by humans, it could indeed have some sort of agenda. AI is trained on algorithms and data sets, which could be erroneous or have unconscious human biases embedded in them.

Your take is optimistic. How nice it would be if AI were actually used in pursuit of human freedom from the mundane and the horrific aspects of civilization. However, if AI replaces many of these jobs, how will people live? I doubt our governments will step in with UBI (especially not the US).

I don’t think AI is going to kill us all (that will happen in some other way related to global warming/overshoot), but I don’t think it’s going to be some sort of panacea on a societal level either.

Andreas H.:

What an idea! UBI! How dare you?

Jezeriah Hopkins:

I’m sure early man's first experience with fire was met with tremendous trepidation and fear. Many probably thought, to some extent, that fire wasn’t something we should be messing with, that it would ruin life as we know it. Who really knows what tribal non-linguistic proto-Homo sapiens thought, but intuition suggests I’m probably not far off.

And now look where fire has taken us today.

Ultimately AI will most likely lead to great things. Might reduce the population when there are fewer and fewer jobs for people.

As far as AI becoming “conscious”: unless consciousness is an emergent computational phenomenon, it’s doubtful. Modern entertainment media has a grip on the minds of man the way the Bible did some time ago. Meaning, people watch too many movies.

And some of the biggest contributors to our modern techno-mediated world are some of the biggest fucking tools. We’re literally being cocooned in their sublimation. (That's a whole other topic)

My issue with not only AI but technology, is people's dependence on it.

For example, when astronauts stay in space for long periods, say a year on the International Space Station, they need to undergo physical therapy when they come back because their bodies have atrophied from disuse.

Perhaps the same thing is happening to the mind. We’re offloading logic, reason, and rationality and letting digitized computation think for us. Most people I’ve observed, when they cannot remember something, pull out their phone. Are we offloading our capacity to memorize? What prolonged effect will this have, especially in childhood development, one of the most crucial stages of our lives, now that younger and younger people have access to these devices?

The human condition is atrophying in the shape of technology's second-hand will.

We digitize the mind to gamify it. Our deeply rooted fears of chaos, death, and crisis will unconsciously externalize into an insatiable need for power, control, and greed. And unless we clear out these repressed fears, we’re destined to become passive, obedient slaves to our unconscious, giving in to subservience to whoever owns the biggest computer.

Or maybe I’ve just watched too many movies.

Brendan:

In one of Plato's writings, Socrates mentions how the invention of the written word will destroy the fitness of people's memories, because instead of remembering something, one can just write it down (I heard this anecdote recently while reading the book Reality+ but found it here as well: https://fs.blog/an-old-argument-against-writing/). Not that AI necessarily has the same implications; it could be much worse for our minds, and I agree we should be cautious with it.

I think at the least the AI revolution requires us to reinvent how we educate people. It seems like education should focus on developing critical thinking skills and giving students the general tools to analyze and synthesize information, rather than on regurgitating facts and subject-specific skills, which was already a necessity with the internet before AI. And our education should give us the clarity of thought and self-awareness to avoid becoming, as you mention, subjects or objects of fear, control, and greed.

I feel like some aspect of education should involve strengthening character and giving us some sort of rite of passage into adulthood that most youth in civilization lack these days. Perhaps our secondary education should involve a year of working for the community, like volunteering as an EMT, helping the homeless, or even military service (if the U.S. armed forces were more ethical). As an introverted and awkward high schooler, I'd have hated being forced to back then, but I think it would have saved me a lot of trouble learning life lessons in my early 20s, and made me a more confident and competent person.

Jezeriah:

Yes, a complete overhaul of education would be a major plus for improving a healthy mindset in the world.

Schooling now just turns one into a proverbial cog for the proverbial machine, like spending an unnecessary amount of time in grade school learning cursive writing just so we can sign our names on legal documents. (Not sure if cursive writing is still a thing like it was for me in the '90s.)

And yes, there's been a lot of upheaval and concern regarding all new technologies: the printing press, the internet, and others. But continuing with the astronauts-in-space analogy, I feel we're spending more and more time in "space".

Things like a better education system will greatly diminish the black hole in everyone that needs to be fed, which exacerbates addiction, obsession, dependency, and obedience.

There's always going to be that, but there are ways of greatly diminishing it.

Brendan:

From what I've heard listening to those on the more skeptical side such as Chomsky and physicist Sean Carroll, one thing separating our intelligence from that of the current AI large language models is that AI does not model the world. When humans think of a real world object or concept, if we have enough experience with it we have a general idea of how it operates in the world through space and time.

AI can give amazingly fast and accurate answers on certain things, but can be tricked by scenarios that are simple for humans to understand. One example I think Carroll has talked about was telling ChatGPT that someone heated a pot up on the stove and then moved it away from the stove, then asking if it would still be hot 24 hours later, and ChatGPT said yes (it was something like that...). While humans intuitively understand why the pot wouldn't be hot, because we operate in the world with our senses and know lots of general things about heat and time passing and stoves and pots, the AI's experience of learning is built on consuming and predicting text (most basically, ChatGPT is just predicting the best next word, but this prediction is based on mountains of data and training). It can be told its answer is wrong and correct itself for next time, and it probably already has for this type of scenario, but it doesn't intuitively know why it was wrong.
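To make "predicting the best next word" concrete, here's a toy sketch using a hand-built bigram table. The words and probabilities are invented purely for illustration; a real LLM learns billions of parameters from enormous text corpora, but the core loop (pick a likely next word, append it, repeat) is the same idea:

```python
# Toy next-word predictor: a hand-built bigram table mapping each word
# to the probabilities of the word that follows it. The numbers are
# made up for illustration -- real models learn these from training data.
bigram_probs = {
    "the": {"pot": 0.6, "stove": 0.4},
    "pot": {"is": 0.7, "on": 0.3},
    "is": {"hot": 0.9, "cold": 0.1},
}

def generate(start, steps):
    """Greedy decoding: repeatedly append the likeliest next word."""
    words = [start]
    for _ in range(steps):
        options = bigram_probs.get(words[-1])
        if not options:  # no known continuation for this word
            break
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the", 3))  # "the pot is hot"
```

The point of the sketch: the table "knows" that "hot" tends to follow "is", but nothing in it knows what heat is or that it dissipates over time, which is exactly the gap the pot example exposes.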

Humans and animals, taking things in with our senses, learn a lot from little information, while AI based on large language models learns by brute-forcing a TON of information with a ton of trial and error. Also, AI doesn't have a 'stream of consciousness' like we do. As far as I understand, you input a question, the AI processes it and returns an answer. Unless it's actively responding to input or being trained, it's not 'thinking' anything.

It will be interesting to see if this form of AI, based on text prediction, ever does eventually model the world like us. Maybe enough data and training will lead to a phase transition where it starts to. But it's hard to imagine certain understandings developing if AI's 'thinking' is done entirely with text/image processing, without seeing, feeling, tasting, smelling, and hearing, and then putting all these senses together like we do.

I'm skeptical of it being an existential threat any time soon, but AI as it is now is still pretty amazing for what we use it for. There are probably plenty of industries involving analyzing text and data that will be affected. I think it might cannibalize the search engine industry a lot. I've noticed the accuracy of search engines declining when I search for specific technical questions, due to the spamminess of the current internet, and ChatGPT often gives better answers. Ironically(?), it will also contribute to the decline of the usefulness of search engines and the web in general, as more content online is generated by AI for $$$-farming purposes.

Sorry for the rambling essay.

Stanley Krippner:

I recall Y2K, the Mayan calendar, Ebola, and the Second Coming. I took none of them too seriously. As for AI, I hope it is not a retelling of "The Boy Who Cried Wolf." In the meantime, I have used AI to get me started on writing assignments, and it does a fine job. Errors, of course, but "Nobody's perfect" (as Joe E. Brown said at the end of "Some Like It Hot").

Vandieman:

Worth bearing with the background noise here, as Zac Rodgers offers an excellent argument for why AI (ChatGPT, for example) is just a plagiarism technology, is strategically ambiguous in relation to capital, and could, as Chris mused, be just another false dawn.

https://podcasts.apple.com/de/podcast/the-big-tent-podcast/id1339367201?i=1000625395878

Ian:

Seeing AI as a threat to human survival is, I think, a far cry from reality. People can debate the "consciousness" and "sentience" of AI back and forth all they want. For me, the biggest delineation between humans and AI is the unbound nature of humans. We can operate independently. We can make decisions, movements, and actions guided only by our internal interests and desires, if we so choose. I imagine someone will check me with an article that highlights this potential in AI. However, I think most AI, including most that will be developed in the near-to-mid future, will require input or direction from a user. AI won't spontaneously begin to optimize the business models of every company; humans will employ an AI to do so. For that reason, I believe we humans won't create an overwhelming amount of AI that could replace humans by force.

That said, it will dramatically change the way we live as a society. That hype isn't overblown, but it also can't be fully understood until it happens. When social media first came out, no one anticipated it would lead to genocides in Southeast Asia, a dramatic increase in depression and suicide among middle school girls, or a re-wiring of people's brain chemistry to shorten attention spans and create addiction to "cheap" serotonin dumps in the form of influencer reels. There may be fabricated hysteria about how our lives will change, but I imagine humans will work to salvage the things we are already actively concerned about. What we will lose more of are the things we aren't currently anticipating. AI may make it financially unfeasible to produce food without CRISPR and GMO technology. Not only highway trucking but all heavy machinery (planes, tractors, trains, etc.) will become computer operated. A lot of pop music will be AI produced, if not also "sung"... I'm not coming up with great original ideas here, but I hope the point stands.

Andreas H.:

I think that this article from Ted Chiang provides some valuable information about AIs.

https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

I hope you can read it; after the first article it's behind a paywall. It's worth clearing the cookies and website data to read it.

I asked two experts for their opinions on the article, and they both said it is a reasonably good explanation, at least as good as analogies can be.

Kevin Russell:

Currently, we are primates perpetuating piss lines with Promethean fire. If we were to all wake up tomorrow with amnesia and take an account of the world that's wired on hair triggers to explode, the concern of AI would not be at the top of our list to debate the pros and cons of. I think we would leverage it to help us dismantle this preposterous position we have found ourselves in. It would give us a mirror of who we were and who we can choose to be without the felt connection to our psychotic past.

Ruka:

This makes much sense to me.

Clint Brown:

I agree with Andreas: difficult to say, but with the way things seem to be trending, a proper zoo for our species might not be too bad. I'm starting to feel like the older people we joked about as kids who would always say, "Ahhhh, the world is going to hell." I'm always asking myself: is the world really getting that bad, or has the world always been this bad and I just had to get a little older and more mature to realize it? Although I do think things are changing at a more rapid pace than ever before, so we are in new territory in that regard. Hopefully the AI we create will treat us like a good dog owner treats their dog. That wouldn't be too bad; things could be worse.

Alias Maximus:

Serious question guys! 49ers or Chiefs?

Chris Ryan:

9ers. I love the Brock Purdy story. And his mom is hot!

Alias Maximus:

YES! Born and raised in the Bay as a lifelong 9er fan, I hope to see them finally win one in my lifetime. Also hoping to see the camera pan over to Purdy's mom and not Taylor Swift so much.

JedB:

Not a major issue for creatives (yet). It will be eliminating some low-hanging fruit (jobs) in the service industry any time now. I have some experience with coding and early AI experimentation, but few have the computing power to run the latest models at full strength.

Panic? No.

Unregulated capitalism? Yes.

Midwest Timecapsules:

I’m a bit scared of it. I have no idea of the full ramifications of AI, but seeing that it may start writing books, screenplays, and otherwise creating art much more efficiently than any human makes me feel nauseous. I see a future with cheap AI-generated children’s movies programming them to enjoy an AI-generated future. Maybe we will be safer and perhaps even work less, but we will surely be more disconnected, more depressed, and less interesting people. It’s a Brave New World out there.

Andreas H.:

The first thing which came to my mind was “Prediction is very difficult, especially if it's about the future.”

The other was https://xkcd.com/1968/ (mobile version: https://m.xkcd.com/1968/)

The third was the headline of Heidegger’s last interview, “Only a God Can Save Us” (https://en.wikipedia.org/wiki/Only_a_God_Can_Save_Us). Maybe we're just about to create the God and will end up in the proper zoo, suited to our species, which Chris has mentioned often. Although this is exactly the opposite of what Heidegger meant, whom I don't like anyway.

I think things will change drastically, and I wonder how many such changes I am able to cope with. During my lifetime there were quite a few already (the collapse of the Soviet Union and Warsaw Pact states, 9/11, the internet, smartphones, COVID). And now AI and CRISPR.

Rod Miller:

"Maybe we're just about to create the God and will end up in the proper zoo, suited to our species."

It seems to me that humans somehow have an instinct that drives them to Worship. As long as Westerners believed in God, there was no problem. But when they stopped believing they started worshipping themselves (which is what we do in consumer society). Celebrity Culture is another dimension of this — today we also worship celebrities.

Now some Silicon Valley types are talking about "creating God" (AI). This process will be greatly enhanced by our need to worship.

Chris Ryan:

Fascinating idea: that the need to worship can be deflected but never extinguished. SOMETHING will be worshipped, whether it be an abstract idea (freedom, Yahweh, progress), some symbol, ancestors, Gucci bags...

Andreas H.:

I did not mean it very seriously. Was more kidding.

Midwest Timecapsules:

I think you’re right. We at least have a tendency to worship. But I’m sure folks define worship differently. I think to worship is to submit to a narrative. I believe humans have a tendency to embed ourselves in some narrative. Religion served that role just fine until recently.

jean campbell:

I hear AI does make mistakes, so that needs to be sorted out before we can know how well it will do in the job market.

Richard Schweid:

I agree with Bill Andresen. It will definitely not disappear like Y2K did, but will have significant positive and negative ramifications...

Chris Ryan:

Ricardo! I didn't know you were reading this. Welcome. Miss you, bud.

Bill Andresen:

Isn’t it possible it’s a little of both? It will most likely have a negative impact on the people who lose their jobs, but be very beneficial for others. Most promising are the benefits in the biomedical realm, where it promises to diagnose and prescribe treatments better than humans.

Jason Lightfoot:

I disagree; bedside manner is the number one issue facing medicine. We have lost human-to-human connection and empathy. The medical system is in need of compassion, not AI.

Pete Fulford:

If AI helps keep people alive longer, what are the consequences to our over-population problem? That’s only going to get worse. Thanks AI.

Noah Kuvorem:

I started discussing AI risk with my best friend back in 2015, when we discovered Eliezer Yudkowsky and the LessWrong forums. Yudkowsky is not the most careful thinker, but he got a lot of smart people talking about this issue a good while ago, and since then they've made some compelling arguments about the risks.

These arguments are not generally approachable enough to generate clicks, and so they generally don't appear in articles or media conversation now that AI has gone mainstream. But that early intellectual community influenced lots of people in tech, which is why many of them came out so loudly and forcefully about the dangers so early on. So any time I hear someone say that it's no different than the printing press, or that it's just sensationalism, I feel they're generally missing a big part of the plot.

Chris Ryan:

But the printing press was a HUGE deal, no?

Rod Miller:

More language peeves.

When I left the English-speaking world in the '70s, a widely used synonym for "permit" was "allow" — you allowed something. Now you have to allow FOR something. The problem with this is that "allow for" used to mean Take Into Consideration. This has been lost.

Language peeves are a dime a dozen of course. And language changes all the time, so new things are bound to wander in & grate on da ear. But some of them sound so wrong.

Another one is "going forward" to mean In Future. Is that as opposed to "going backward"?

Chris Ryan:

And when did people stop pronouncing the "r" in "going forward?" They say, "going fo-ward." Weird. And when did "disinterested" become synonymous with "uninterested?" I fought that one for years, but I've given up at this point. Old man shouting over.

The Inquiring Yogi:

Essentially everything is to generate clicks and sell papers. We have to assume that everything we read has been exaggerated, because it has. A person sat down and thought of the best hook or shock factor for any story. All to elicit some kind of reaction from you, because that’s the only way they know if it worked, and if the content was actually consumed.

Chris Ryan:

And given the algorithmic universe we now live in, that reaction further increases visibility and engagement in a cascading series of inane nonsense amplification.
