Pause Giant AI Experiments: An Open Letter
Overview
WikicharliE Patrimonio de Chile
In English
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.
We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here.
Signatories (partial list)
Yoshua Bengio, Founder and Scientific Director at Mila, Turing Prize winner and professor at University of Montreal
Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook "Artificial Intelligence: A Modern Approach"
Elon Musk, CEO of SpaceX, Tesla & Twitter
Steve Wozniak, Co-founder, Apple
Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem
Emad Mostaque, CEO, Stability AI
Andrew Yang, Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship
John J. Hopfield, Princeton University, Professor Emeritus, inventor of associative neural networks
Valerie Pisano, President & CEO, MILA
Connor Leahy, CEO, Conjecture
Jaan Tallinn, Co-Founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute
Evan Sharp, Co-Founder, Pinterest
Chris Larsen, Co-Founder, Ripple
Craig Peters, CEO, Getty Images
Max Tegmark, MIT Center for Artificial Intelligence & Fundamental Interactions, Professor of Physics, president of Future of Life Institute
Anthony Aguirre, University of California, Santa Cruz, Executive Director of Future of Life Institute, Professor of Physics
Sean O'Heigeartaigh, Executive Director, Cambridge Centre for the Study of Existential Risk
Tristan Harris, Executive Director, Center for Humane Technology
Rachel Bronson, President, Bulletin of the Atomic Scientists
Danielle Allen, Professor, Harvard University; Director, Edmond and Lily Safra Center for Ethics
Marc Rotenberg, Center for AI and Digital Policy, President
Nico Miailhe, The Future Society (TFS), Founder and President
Nate Soares, MIRI, Executive Director
Andrew Critch, AI Research Scientist, UC Berkeley; CEO, Encultured AI, PBC; Founder and President, Berkeley Existential Risk Initiative
Mark Nitzberg, Center for Human-Compatible AI, UC Berkeley, Executive Director
Yi Zeng, Institute of Automation, Chinese Academy of Sciences, Professor and Director, Brain-inspired Cognitive Intelligence Lab, International Research Center for AI Ethics and Governance, Lead Drafter of Beijing AI Principles
Steve Omohundro, Beneficial AI Research, CEO
Meia Chita-Tegmark, Co-Founder, Future of Life Institute
Victoria Krakovna, DeepMind, Research Scientist, co-founder of Future of Life Institute
Emilia Javorsky, Physician-Scientist & Director, Future of Life Institute
Mark Brakel, Director of Policy, Future of Life Institute
Aza Raskin, Center for Humane Technology / Earth Species Project, Co-founder, National Geographic Explorer, WEF Global AI Council
Gary Marcus, New York University, AI researcher, Professor Emeritus
Vincent Conitzer, Carnegie Mellon University and University of Oxford, Professor of Computer Science, Director of Foundations of Cooperative AI Lab, Head of Technical AI Engagement at the Institute for Ethics in AI, Presidential Early Career Award in Science and Engineering, Computers and Thought Award, Social Choice and Welfare Prize, Guggenheim Fellow, Sloan Fellow, ACM Fellow, AAAI Fellow, ACM/SIGAI Autonomous Agents Research Award
Huw Price, University of Cambridge, Emeritus Bertrand Russell Professor of Philosophy, FBA, FAHA, co-founder of the Cambridge Centre for Existential Risk
Zachary Kenton, DeepMind, Senior Research Scientist
Ramana Kumar, DeepMind, Research Scientist
Jeff Orlowski-Yang, The Social Dilemma, Director, Three-time Emmy Award Winning Filmmaker
Olle Häggström, Chalmers University of Technology, Professor of Mathematical Statistics, Member, Royal Swedish Academy of Sciences
Michael Osborne, University of Oxford, Professor of Machine Learning
Raja Chatila, Sorbonne University, Paris, Professor Emeritus of AI, Robotics and Technology Ethics, Fellow, IEEE
Moshe Vardi, Rice University, University Professor, US National Academy of Science, US National Academy of Engineering, American Academy of Arts and Sciences
Adam Smith, Boston University, Professor of Computer Science, Gödel Prize, Kanellakis Prize, Fellow of the ACM
Daron Acemoglu, MIT, Professor of Economics, Nemmers Prize in Economics, John Bates Clark Medal, and fellow of the National Academy of Sciences, American Academy of Arts and Sciences, British Academy, American Philosophical Society, Turkish Academy of Sciences
Christof Koch, MindScope Program, Allen Institute, Seattle, Chief Scientist
Marco Venuti, Director, Thales Group
Gaia Dempsey, Metaculus, CEO, Schmidt Futures Innovation Fellow
Henry Elkus, Founder & CEO, Helena
Gaétan Marceau Caron, MILA, Quebec AI Institute, Director, Applied Research Team
Peter Asaro, The New School, Associate Professor and Director of Media Studies
Jose H. Orallo, Technical University of Valencia, Leverhulme Centre for the Future of Intelligence, Centre for the Study of Existential Risk, Professor, EurAI Fellow
George Dyson, Unaffiliated, Author of "Darwin Among the Machines" (1997), "Turing's Cathedral" (2012), "Analogia: The Emergence of Technology beyond Programmable Control" (2020)
Nick Hay, Encultured AI, Co-founder
Shahar Avin, Centre for the Study of Existential Risk, University of Cambridge, Senior Research Associate
Solon Angel, AI Entrepreneur, Forbes, World Economic Forum Recognized
Gillian Hadfield, University of Toronto, Schwartz Reisman Institute for Technology and Society, Professor and Director
Erik Hoel, Tufts University, Professor, author, scientist, Forbes 30 Under 30 in science
Kate Jerome, Children's Book Author / Co-founder, Little Bridges, Award-winning children's book author, C-suite publishing executive, and intergenerational thought leader
Ian Hogarth, Co-author, State of AI Report
Bart Selman, Cornell, Professor of Computer Science, past president of AAAI
Tom Gruber, Siri/Apple, Humanistic.AI, Co-founder, CTO, led the team that designed Siri, co-founder of 4 companies
Robert Brandenberger, McGill University, Professor of Physics
Alfonso Ngan, Hong Kong University, Chair in Materials Science and Engineering
J.M. Don MacElroy, University College Dublin, Emeritus Chair of Chemical Engineering
Lawrence M. Krauss, President, The Origins Project Foundation
Michael Wellman, University of Michigan, Professor and Chair of Computer Science & Engineering
Berndt Mueller, Duke University, J.B. Duke Professor of Physics
Alan Mackworth, University of British Columbia, Professor Emeritus of Computer Science
Grady Booch, ACM Fellow, IEEE Fellow, IEEE Computing Pioneer, IBM Fellow
Rolf Harald Baayen, University of Tuebingen, Professor
Tor Nordam, NTNU, Adjunct Associate Professor of Physics
Joshua David Greene, Harvard University, Professor
Arturo Giraldez, University of the Pacific, Professor
Scott Niekum, University of Massachusetts Amherst, Associate Professor
Lars Kotthoff, University of Wyoming, Assistant Professor, Senior Member, AAAI and ACM
Steve Petersen, Niagara University, Associate Professor of Philosophy
Yves Deville, UCLouvain, Professor of Computer Science
Christoph Weniger, University of Amsterdam, Associate Professor of Theoretical Physics
Luc Steels, University of Brussels (VUB) Artificial Intelligence Laboratory, Emeritus Professor and Founding Director, EURAI Distinguished Service Award, Chair for Natural Science of the Royal Flemish Academy of Belgium
Robert Kowalski, Department of Computing, Imperial College London, Professor Emeritus and Distinguished Research Fellow, IJCAI Award for Research Excellence
Roman Yampolskiy, Professor
Alyssa M. Vance, Blue Rose Research, Senior Data Scientist
Jonathan Moreno, University of Pennsylvania, David and Lyn Silfen University Professor, Member, National Academy of Medicine
Andrew Barto, University of Massachusetts Amherst, Professor Emeritus, Fellow AAAS, Fellow IEEE
Constantin Jorel, University of Caen, Assistant Professor
Paul Rosenbloom, University of Southern California, Professor Emeritus of Computer Science, Fellow of the American Association for the Advancement of Science, the Association for the Advancement of Artificial Intelligence, and the Cognitive Science Society
Michael Gillings, Macquarie University, Professor of Molecular Evolution
Geoffrey Odlum, Odlum Global Strategies, President, Retired U.S. Diplomat
Benjamin Kuipers, University of Michigan, Professor of Computer Science, Fellow, AAAI, IEEE, AAAS
Chi-yuen Wang, UC Berkeley, Professor Emeritus
Johann Rohwer, Stellenbosch University, Professor of Systems Biology
Dana S. Nau, University of Maryland, Professor, Computer Science Dept. and Institute for Systems Research, AAAI Fellow, ACM Fellow, AAAS Fellow
Grigorios Tsoumakas, Aristotle University of Thessaloniki, Associate Professor
Peter B. Reiner, University of British Columbia, Professor of Neuroethics
Andrew Francis, Western Sydney University, Professor of Mathematics
Vassilis P. Plagianakos, University of Thessaly, Greece, Professor of Computational Intelligence, Dean of the School of Science
Eleanor 'Nell' Watson, EthicsNet, Research Director, Chartered Fellow of BCS, The Chartered Institute for IT (FBCS), Fellow of the Institution of Analysts and Programmers (FIAP), Fellow of the Institute for Innovation and Knowledge Exchange (FIKE), Fellow of the Royal Society of Arts (FRSA), Fellow of the Royal Statistical Society (FRSS), Fellow of the Chartered Management Institute (FCMI), Fellow of the Linnaean Society (FLS), Certified Ethical Emerging Technologist (CEET), Senior Fellow, The Atlantic Council, Senior Member IEEE (SMIEEE)
Stefan Sint, Trinity College Dublin, Associate Professor, School of Mathematics
Hector Geffner, RWTH Aachen University, Alexander von Humboldt Professor, Fellow AAAI, EurAI
Brendan McCane, University of Otago, Professor
Kang G. Shin, University of Michigan, Professor, Fellow of IEEE and ACM, winner of the Ho-Am Engineering Prize
Miguel Gregorkiewitz, University of Siena, Italy, Professor
Marcus Frei, NEXT.robotics GmbH & Co. KG, CEO, Member, European DIGITAL SME Alliance FG AI, Advisory Board http://ciscproject.eu
Václav Nevrlý, VSB-Technical University of Ostrava, Faculty of Safety Engineering, Assistant Professor
Alan Frank Thomas Winfield, Bristol Robotics Laboratory, UWE Bristol, UK, Professor of Robot Ethics
Luís Caires, NOVA University Lisbon, Professor of Computer Science and Head of NOVA Laboratory for Computer Science and Informatics
Vincent Corruble, Sorbonne University, Associate Professor of Computer Science
Thomas Soifer, California Institute of Technology, Harold Brown Professor of Physics, Emeritus, NASA Distinguished Public Service Medal, NASA Exceptional Scientific Achievement Medal
Sunyoung Yang, The University of Arizona, Assistant Professor
The Anh Han, Teesside University, Professor of Computer Science, Lead of Centre for Digital Innovation
Yngve Sundblad, KTH Royal Institute of Technology, Stockholm, Professor Emeritus
Courtney M. Peterson, University of Alabama at Birmingham, Associate Professor
Marco Dorigo, Université Libre de Bruxelles, AI Lab Research Director, AAAI Fellow; EurAI Fellow; IEEE Fellow; IEEE Frank Rosenblatt Award; Marie Curie Excellence Award
Domenico Talia, University of Calabria, Professor
Divya Siddarth, Collective Intelligence Project, Co-director
Timothy John O'Donnell, McGill University/Mila, Professor, Canada CIFAR AI Chair
Hans Martin Seip, University of Oslo, Professor Emeritus, Member of the Norwegian Academy of Science and Letters and of The Royal Norwegian Society of Sciences and Letters
Kim Mens, Professor of Computer Science, UCLouvain
Thomas Wallis, Technical University of Darmstadt, Germany, Professor
Damian Lyons, Fordham University, Professor, SM IEEE
Dan Hendrycks, Center for AI Safety, CEO
Jakob Foerster, University of Oxford, Associate Professor, Awarded ERC Starting Grant in 2023
Michael Symonds, The University of Nottingham, Emeritus Professor
Andrew Robinson, The University of Melbourne, Professor
Tony J. Prescott, University of Sheffield, Professor of Cognitive Robotics
Robert Brooks, UNSW Sydney, Scientia Professor
Zbigniew H. Stachurski, Australian National University, A/Prof. (retired)
David Scott Krueger, University of Cambridge, Assistant Professor
Yoshihiko Nakamura, University of Tokyo, Senior Researcher / Professor Emeritus
Pablo Jarillo-Herrero, MIT, Professor of Physics, Wolf Prize in Physics, US National Academy of Sciences
Raul Monroy, Tecnologico de Monterrey, Professor
Peter Vamplew, Federation University Australia, Professor of Information Technology
Jean-Claude Latombe, Computer Science Department, Stanford University, Professor Emeritus
Frank van den Bosch, Yale University, Professor of Theoretical Astrophysics
Richard Dazeley, Deakin University, Professor of Artificial Intelligence and Machine Learning
M. V. N. Murthy, Former Professor, The Institute of Mathematical Sciences, Chennai, India, Fellow, Indian Academy of Sciences
Qiaobing Xu, Tufts University, Professor of Biomedical Engineering, Fellow of AIMBE
Raymund Sison, De La Salle University, Professor and University Fellow, Metrobank Foundation Outstanding Teacher and NAST Outstanding Young Scientist
Jonathan Cefalu, Preamble, Inc., Chairman, Inventor of Prompt Injection; Forbes 30 Under 30 for inventing Snapchat Spectacles AR glasses
George Helou, California Institute of Technology, Executive Director of IPAC at Caltech, Fellow, American Astronomical Society; NASA Distinguished Public Service Medal; Gruber Cosmology Prize (2018, shared)
Jacob Tsimerman, University of Toronto, Professor of Mathematics, New Horizons 2021 Prize winner
Jeffrey Ladish, Security Researcher
Jaak Tepandi, Tallinn University of Technology, Professor Emeritus of Knowledge-Based Systems, recognised by ITU Secretary General Hamadoun I. Touré, as Work Area Leader of the High Level Expert Group, for contribution towards a more secure and safer information society (January 2009)
Hema A. Murthy, Indian Institute of Technology Madras, India, Professor, Fellow, Indian National Academy of Engineering; Fellow, International Speech Communication Association
Richard Guy Compton, Oxford University, Professor of Chemistry
Ulises Cortés, Universitat Politècnica de Catalunya, Professor, Fellow, Sociedad Mexicana de Inteligencia Artificial, Mexican of the Year 2018
Robert Babuska, Delft University of Technology, Professor
Alexander Schütz, University of Marburg, Professor of Experimental Psychology
Joan Manuel del Pozo Alvarez, Spain, Professor of Philosophy, former Minister of Education and Universities
Miguel Angel Ducci, Chile, Writer, researcher and CEO
Albert Sabater, University of Girona, Director of the Catalan Observatory for Ethics in Artificial Intelligence
Gert Jervan, Tallinn University of Technology, Professor of Dependable Computer Systems, Dean of School of Information Technologies
Gregory Provan, University College Cork, Professor, Rhodes Scholar
Jordi Miralda-Escude, Institut de Ciències del Cosmos, Universitat de Barcelona, ICREA, ICREA Research Professor
Frits Vaandrager, Radboud University, Head of the Department of Software Science
Nicholas Taylor, Heriot-Watt University, Professor of Computer Science, Chartered Engineer, Chartered IT Professional, Chartered Mathematician, Fellow, British Computer Society, Fellow, Higher Education Academy, Member, Institute of Mathematics and its Applications
Eduard Salvador-Solé, University of Barcelona, Full Professor
Tom Lenaerts, Université Libre de Bruxelles, Professor
Gerhard Lakemeyer, RWTH Aachen University, Professor, Fellow, European Association for Artificial Intelligence
Simeon Campos, SaferAI, Founder
Stuart S. Blume, University of Amsterdam, Emeritus Professor of Science & Technology Studies
Manel Sanromà, Universitat Rovira i Virgili, Tarragona, Catalonia, Professor of Applied Mathematics, Founder of CIVICAi, Trustee Emeritus, Internet Society
Paolo Zuccon, Trento University, Italy, Associate Professor
Georgios Gounalakis, Philipps-University Marburg, Professor of Law
João Emilio Almeida, LIACC / CITECA / ISTEC Porto, Professor / Researcher, Senior and Specialist member of the Portuguese Engineers Order
Jan Pieter van der Schaar, Institute of Physics, University of Amsterdam, Associate Professor
Oren Schuldiner, Weizmann Institute of Science, Professor
Michel Schellekens, University College Cork, Professor, Fulbright Award
Martin Welk, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria, Professor
Sources & references
- futureoflife.org: "Pause Giant AI Experiments: An Open Letter"
- El Ciudadano: "El peligro de la IA, Ducci nos advierte con el Etercuanticum"
