<noinclude>{{page/date}}</noinclude>
* '''2022-11-30''' [https://www.michaelgeist.ca/2022/11/freedom-of-expression-for-a-price-government-confirms-bill-c-18-requires-platform-payment-for-user-posts-that-include-news-quotes-and-hyperlinks/ Freedom of Expression for a Price: Government Confirms Bill C-18 Requires Platform Payment for User Posts That Include News Quotes and Hyperlinks] ([https://mas.to/@mgeist/109433136521715782 h/t]): <code>EXTRACTIVATE! EXTRACTIVATE!</code>...
* '''2022-11-30''' [https://web.archive.org/web/20221130144453/https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/ Effective Altruism Is Pushing a Dangerous Brand of 'AI Safety'] ([https://dair-community.social/@timnitGebru/109433230405573162 via]) {{fmt/quote|This philosophy – supported by tech figures like Sam Bankman-Fried – fuels the AI research agenda, creating a harmful system in the name of saving humanity}} {{fmt/quote|[[effective altruism|EA]] is defined by the [[Center for Effective Altruism]] as “an intellectual project, using evidence and reason to figure out how to benefit others as much as possible.” And “evidence and reason” have led many EAs to conclude that the most pressing problem in the world is preventing an apocalypse where an artificially generally intelligent being (AGI) created by humans exterminates us.}} There's just so much wrong with this... starting with the fact that any AI designed by these guys ''is going to be self-centered and destructive.''