
Yuval Noah Harari Explains How to Protect Your Mind in the Age of AI


You could say that we live in the age of artificial intelligence, though it feels truer of no aspect of our lives than it does of advertising. “If you want to sell something to people today, you call it AI,” says Yuval Noah Harari in the new Big Think video above, even if the product has only the vaguest technological association with that label. To determine whether something should actually be called artificially intelligent, ask whether it can “learn and change by itself and come up with decisions and ideas that we don’t anticipate,” indeed can’t anticipate. That AI-enabled waffle iron being pitched to you probably doesn’t make the cut, but you may already be interacting with numerous systems that do.

As the author of the global bestseller Sapiens and other books concerned with the long arc of human civilization, Harari has given a good deal of thought to how technology and society interact. “In the twentieth century, the rise of mass media and mass information technology, like the telegraph and radio and television” formed “the basis for large-scale democratic systems,” but also for “large-scale totalitarian systems.”

Unlike in the ancient world, governments could at least begin to “micromanage the social and economic and cultural lives of every individual in the country.” Even the vast surveillance apparatus and bureaucracy of the Soviet Union “could not surveil everybody all the time.” Alas, Harari anticipates, things will be different in the AI age.

Human-operated organic networks are being displaced by AI-operated inorganic ones, which “are always on, and therefore they might force us to be always on, always being watched, always being monitored.” As they gain dominance, “the whole of life is becoming like one long job interview.”

At the same time, even if you were already feeling inundated by information before, you’ve more than likely felt the waters rise around you due to the infinite production capacities of AI. One individual-level strategy Harari recommends to counteract the flood is going on an “information diet,” restricting the flow of that “food of the mind,” which only sometimes has anything to do with the truth. If we binge on “all this junk information, full of greed and hate and fear,” we will have sick minds; perhaps a period of abstinence can restore a certain degree of mental health. You might consider spending the rest of the day taking in as little new information as possible — just as soon as you finish catching up on Open Culture, of course.

Related content:

Sci-Fi Writer Arthur C. Clarke Predicted the Rise of Artificial Intelligence & the Existential Questions We Would Need to Answer (1978)

Will Machines Ever Truly Think? Richard Feynman Contemplates the Future of Artificial Intelligence (1985)

Isaac Asimov Describes How Artificial Intelligence Will Liberate Humans & Their Creativity: Watch His Last Major Interview (1992)

How Will AI Change the World?: A Captivating Animation Explores the Promise & Perils of Artificial Intelligence

Stephen Fry Explains Why Artificial Intelligence Has a “70% Risk of Killing Us All”

Yuval Noah Harari and Fareed Zakaria Break Down What’s Happening in the Middle East

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities and the book The Stateless City: a Walk through 21st-Century Los Angeles. Follow him on the social network formerly known as Twitter at @colinmarshall.




