Artificial Intelligence and Society: Philosophy of Fallibility
Part 23: Will Superhumans Eradicate Ordinary Human Beings?

KOBAYASHI Keiichiro
Faculty Fellow, RIETI

The idea that “everything is fallible” is the only theory that could itself be an “infallible truth.” Only a comprehensive doctrine premised on the principle of fallibility can keep the moral value \(q_{t}\) of the social system positive forever. Therefore, we believe that the ideals a society maintains must be premised on the principle of fallibility. This applies not only to ordinary human beings but also to superhumans, that is, human beings whose intellectual power has been enhanced by AI.

Let us assume that ordinary human beings and superhumans enhanced by AI and biotechnology have been divided into two separate social classes. Even in that case, superhumans, too, would be aware of their own fallibility. Despite being enhanced by AI, they would understand reality only through “approximate calculations” and would be unable to “truly” understand “everything.” Pattern identification based on deep learning is itself a form of approximate calculation using prepared sets of real-world patterns. Superhumans, too, would understand that all intellectual activities are an accumulation of approximate calculations.

Superhumans who are aware of their own fallibility can be expected to create a society that is tolerant of activities freely conducted by a great variety of beings (including ordinary human beings). If they are aware of their own fallibility, they are certain to recognize the possibility that innovations brought about by other people (including ordinary human beings) could have a significant impact on themselves.
If the interactions that could occur between unforeseen innovations are taken into consideration, then from the superhumans’ point of view, respecting the continued existence of ordinary human beings, rather than wiping them out (or letting them wither into extinction), would be the most beneficial and rational decision even for purely selfish reasons (Note 1).

The vision of a diverse and tolerant society premised on the principle of fallibility is nothing more than what we can imagine within the limits of our own thinking. One problem for me is this: when thinking about a future society in which co-existence with AI is inevitable, how far will I, a mere ordinary human being, be able to follow the reasoning of AI (and of superhumans whose intellectual power has been enhanced by AI), which is expected to transcend current human understanding? Of course, the possibility cannot be ruled out that superhumans, by following some line of reasoning that is beyond the author’s understanding, will arrive at the conclusion that ordinary human beings should be exploited or eradicated. Even so, there is one thing we can say for sure. At the least, believing in the fallibility of any bleak vision of future society remains an option for us; that is, we can choose to believe that the assumption that superhumans will eradicate mankind may itself be wrong. In this case, fallibility is another name for hope.

Footnote
(1) The logic mentioned here applies not only to the relationship between superhumans and ordinary human beings but also to the relationship between superhumans and other animals and plants. Even though intellectual activity may be the exclusive domain of Homo sapiens, by respecting biodiversity, ordinary human beings and superhumans can expect to benefit in various ways, including from resources generated through the activities of the diverse assortment of beings on the planet (e.g., drug ingredients, useful chemical substances, and raw materials). Given this expectation, even if superhumans act entirely selfishly, they are certain to consider respecting biodiversity to be a reasonable decision. This is exactly the same logic as the one applied in the main text to the relationship between superhumans and ordinary human beings.

September 21, 2023
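Appendix: an illustrative sketch of “approximate calculation.” The column’s remark that pattern identification by deep learning is only an approximate calculation over prepared examples can be made concrete with a small numerical example. The following Python code is an editorial illustration, not part of the original column; the use of NumPy, the toy target function, and all names and parameter values are assumptions. It fits a tiny one-hidden-layer network to noisy samples of a curve it never “truly” knows, and the residual error that remains is a small-scale picture of the fallibility discussed above.

# A minimal sketch (not from the article): a tiny neural network that
# "identifies a pattern" only by approximating it from prepared examples.
# All names and parameter choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Prepared set of "real-world patterns": noisy samples of an unknown function.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# One hidden layer with tanh activation: an approximator of the pattern,
# never an exact representation of the underlying function.
hidden = 16
W1 = rng.standard_normal((1, hidden)) * 0.5
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.01
for step in range(5000):
    # Forward pass: the network's current "understanding" of the data.
    h = np.tanh(x @ W1 + b1)          # shape (200, hidden)
    pred = h @ W2 + b2                # shape (200, 1)
    err = pred - y

    # Backward pass: adjust the approximation to reduce the squared error.
    grad_pred = 2 * err / len(x)      # (200, 1)
    grad_W2 = h.T @ grad_pred         # (hidden, 1)
    grad_b2 = grad_pred.sum(axis=0)   # (1,)
    grad_h = grad_pred @ W2.T         # (200, hidden)
    grad_pre = grad_h * (1 - h ** 2)  # tanh derivative
    grad_W1 = x.T @ grad_pre          # (1, hidden)
    grad_b1 = grad_pre.sum(axis=0)    # (hidden,)

    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

# The fit is only ever approximate: some residual error always remains.
print("mean squared error:", float(np.mean(err ** 2)))

Even after training, the printed mean squared error stays above zero: the network has only approximated the pattern from the examples it was given, not grasped the underlying function exactly.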