
Archive for the ‘defense’ category

Jun 12, 2020

DARPA, Biotech, and Human Enhancement — ideaXme — Dr. Eric Van Gieson — Biological Technologies Office (BTO) Epigenetic CHaracterization and Observation (ECHO) Program — Ira Pastor

Posted in categories: aging, bioengineering, biotech/medical, defense, DNA, genetics, government, health, life extension, military

Feb 8, 2020

Bio-Security — Dr. Tara O’Toole MD, EVP and Senior Fellow at In-Q-Tel, director of B.Next, former Under Secretary for the Science and Technology Directorate at the U.S. Department of Homeland Security (DHS) — ideaXme — Ira Pastor

Posted in categories: aging, biological, biotech/medical, defense, DNA, genetics, government, health, life extension, science

Jan 5, 2020

Fighting Ebola and other Highly Hazardous Pathogens In A Hot Zone! — Colonel (ret) Dr. Mark Kortepeter, MD, MPH — ideaXme — Ira Pastor

Posted in categories: aging, bioengineering, biological, biotech/medical, defense, genetics, health, life extension, military, posthumanism, science

Jul 5, 2019

Dr. Steffanie Strathdee PhD. — UCSD Center for Innovative Phage Applications and Therapeutics (IPATH) — ideaXme — Ira Pastor

Posted in categories: aging, bioengineering, biotech/medical, counterterrorism, defense, disruptive technology, existential risks, genetics, health, life extension

May 21, 2019

Commander (ret) Dr. Luis Alvarez, Director of Organ Manufacturing, United Therapeutics, and Co-Founder of GDF11 Harvard spin-out Elevian and MIT spin-out Theradaptive — ideaXme Show — Ira Pastor

Posted in categories: aging, bioengineering, biotech/medical, business, defense, DNA, health, life extension, military, science

Feb 13, 2019

Could Mosquitos be more friend than foe?

Posted in categories: aging, bees, biological, biotech/medical, defense, genetics, health, life extension, neuroscience, science

Jun 9, 2017

Startup Societies Summit: A Decentralized Governance Trade Show

Posted in categories: bitcoin, business, cryptocurrencies, defense, economics, futurism, geopolitics, governance, government

Lifeboat Foundation readers are aware that the world has become progressively more chaotic. Part of the danger comes from centralized points of failure. While large institutions can bear great stress, they also cause more harm when they fail. Because the system rests on so few pillars, the collapse of one can bring down the whole structure.

For instance, prior to the Federal Reserve System, bank runs were extremely common. However, because the financial system consisted of small, competing institutions, failure was confined to the deficient banks. So while failure was frequent, it was neither far-reaching nor systemic. In contrast, after the establishment of the Federal Reserve, banks became fewer and larger. Failures, while less frequent, were large-scale catastrophes when they occurred, affecting the whole economy and leaving longer-lasting damage.

This is even more important in political systems, which are the foundation of how a society operates. To have a more robust, antifragile social order, systems must be decentralized. Rather than a monopolistic, static political order, there must be a series of decentralized experiments. While failures are inevitable, they can be localized to these small experiments rather than spreading to the whole structure.

Continue reading “Startup Societies Summit: A Decentralized Governance Trade Show” »

May 12, 2016

Pentagon Turns to Silicon Valley for Edge in Artificial Intelligence — By John Markoff | The New York Times

Posted in categories: defense, military

“In its quest to maintain a United States military advantage, the Pentagon is aggressively turning to Silicon Valley’s hottest technology — artificial intelligence.”

Read more

Mar 18, 2016

Who’s Afraid of Existential Risk? Or, Why It’s Time to Bring the Cold War out of the Cold

Posted in categories: defense, disruptive technology, economics, existential risks, governance, innovation, military, philosophy, policy, robotics/AI, strategy, theory, transhumanism

At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan – in the guise of an ongoing US presidential bid – to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst-case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focuses on how we might prevent, if not predict, any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether it might not be better simply to halt artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the sheer magnitude of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of not believing in God – to wit, eternal damnation – rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.
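The structure being described is a simple expected-value calculation. As a minimal illustrative sketch (in Python, with entirely made-up numbers that are not drawn from the essay), the snippet below shows how an assumed near-infinite loss can swamp a vanishingly small probability, which is exactly the Pascal-style counterbalancing at issue here.

```python
def expected_loss(probability: float, loss: float) -> float:
    """Expected loss: probability of the event times the harm if it occurs."""
    return probability * loss

# A mundane risk: fairly likely, but the harm is bounded. (Illustrative numbers only.)
mundane = expected_loss(probability=0.10, loss=1_000)        # 100

# An "existential" risk: vanishingly unlikely, but the harm is treated as near-infinite.
existential = expected_loss(probability=1e-9, loss=1e12)     # 1,000

print(f"mundane risk expected loss:     {mundane:,.0f}")
print(f"existential risk expected loss: {existential:,.0f}")
# Even at a one-in-a-billion probability, the assumed magnitude of the loss makes the
# existential risk dominate the comparison - and makes the conclusion highly sensitive
# to how that loss is estimated, which is the vulnerability the passage goes on to press.
```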

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept that it is cracked up to be. I don’t believe it is.

Continue reading “Who's Afraid of Existential Risk? Or, Why It's Time to Bring the Cold War out of the Cold” »

Oct 20, 2015

Drone ‘Angst’ extends beyond backyard spying

Posted in categories: automation, counterterrorism, defense, disruptive technology, drones, ethics, military, privacy, surveillance

http://aviationweek.com/defense/drone-angst-extends-beyond-backyard-spying
