US $6.83

AI Prevails: How to Keep Yourself and Humanity Safe
Author: R. Simpson, James
Year: July 27, 2021
Format: PDF
File size: 1.9 MB
Language: English

"The world is very different now. For man holds in his mortal hands the power to abolish all forms of human poverty and all forms of human life." - John F. Kennedy

In this book I reveal a vision of the future that will keep you up at night. Researchers all over the world are racing to develop technologies using Artificial Intelligence (AI). Some in the scientific community are engaged in the study and development of superintelligence. If they succeed in that endeavor, AI would become smarter than its creators. We have all seen sci-fi movies that imagine hellish, apocalyptic futures in which AI destroys our world. What if it comes on gradually, a drumbeat growing louder every day, until one day the deafening roar overtakes everything? What if that day is not very far away?

Join me as I search for clues about who and what are behind the research and development of these technologies. Was it a cabal, a secret political clique or faction? Was it the technocrats, a powerful technical elite who advocate the supremacy of technical experts? Was it the plutocracy, that is, government by the wealthy? Or was it our own government? Are those around the world who develop technologies that will dramatically affect our destinies truly concerned about life satisfaction, quality of life, or whatever else happiness is called? Should we simply accept a life of uncontrolled artificial intelligence and bend to prophet-like futurists' scenarios? Critically, a plausible risk is that unenhanced humans could become superfluous as soon as the 2050s.