My favorite subreddit is retrofuturism. Every day I spend significant time reading and thinking about what people of the past imagined the future would be like. In my opinion, most of these exotic, complex, Rube Goldberg doom-and-gloom scenarios have nothing on good old-fashioned nuclear war and climate change. I remember reading the dystopic futures of Neuromancer and Snow Crash in the 1990s. At the time, people genuinely believed that crap was right around the corner. Less than 20 years later, it all possesses a laugh-out-loud level of anachronism and quaintness. When the word "robot" was invented, people were already worrying about them taking over. True AI is a pipe dream, and we're about as close to it now as we were in 1920. And "grey goo", a relatively recent idea which is obliquely referred to in one of your quotes, seems like a fever dream of the past. Also, Kurzweil is a loon. He possesses an unassailable vision of the future so rigidly fixed that it approaches a messianic cult level of false certainty. He is immune to any new data at all. Edit - just realized the article is 15 years old, which explains a lot. The paranoid fears of 15 years ago belong to 15 years ago. In 15 years, people will be passing the article to friends with the future equivalent of "Where's my flying car?" jokes.
Reading this in chunks, this assumes that the elite (Skynet) has the same needs and wants that we do (resources, livable potable land, environmental constraints from acquiring both). Otherwise, the most "humane" and "liberal" solution would be building a spaceship (or some other medium) and blasting off into a higher plane of existence beyond meatspace.
My job is to take out overly intelligent creations melded of genetically engineered flesh and the pinnacle of artificial intelligence, creations far stronger than I am and operating at a fever pitch. Yet they don't frighten me. Why? I'm a crafty son of a b****, that's why. I prefer Isaac Asimov to James Cameron.
I thought this was an interesting article to throw into the mix. http://www.theguardian.com/environm...1-of-amphibians-set-to-go-the-way-of-the-dodo
Well yeah, the original premise is that technology and innovation will doom us all. The last post is that humanity's overconsumption will doom us all. Some fears of a resource crunch bubbled up in the mainstream during the 70s, but they never came to be because of...technology and innovation. Lather, rinse, repeat. What exactly is the answer then? I think my replies have been consistent if you apply them two or three steps down the line. If the environment adjusts and we're not completely eradicated from it, then "worse" becomes a subjective argument. How does now compare to the time when we were one red button away from mutually assured destruction? How does the possible future compare to living in central Europe during the early 20th century? Another question to pose is what exactly "the future" means. Or what constitutes progress? Different answers for different people, but finding a common one would certainly shed light on what's for better or worse.
Worse? Than what? Or better, for that matter. To make qualitative judgments you need metrics. The Universe, evolution, and entropy don't judge; they just change. Human beings need essentials, desire creature comforts, want to avoid pain and sickness, and live to propagate. We are probably going to get better at those things.
The original premise is that we have to weigh the use of disruptive technology, especially technology that has to be pushed forward to serve consumption patterns which may be unsustainable. It's why I specifically highlighted these passages: "Given the incredible power of genetic engineering, it's no surprise that there are significant safety issues in its use. My friend Amory Lovins recently cowrote, along with Hunter Lovins, an editorial that provides an ecological view of some of these dangers. Among their concerns: that "the new botany aligns the development of plants with their economic, not evolutionary, success." and "The dream of robotics is, first, that intelligent machines can do our work for us, allowing us lives of leisure, restoring us to Eden." This article drives straight at the point of whether or not there will be a better future, and it highlights a critical factor in the debate over potentially disruptive (for better or for worse) technology: the driving need behind it (overconsumption) and the reality that conditions may come to dictate drastic solutions (such as a completely reengineered set of fauna and flora for warmer temperature conditions). Of course, the reason why these technologies will be needed is a) humanity's inability up to now to create a sustainable culture. In a few hundred years, we will have undone millions of years of genetic diversity, to our peril. b) Our dream of "going back to Eden", a land that never existed and perhaps never should, if the technologies that power it are too dangerous. To tie it all together, I think that the reason why we require so many disruptive and potentially dangerous technologies has more to do with rapacious needs than well-considered progress.
Because of those needs, science has had to be accelerated several times without sufficient checks (atom bomb, bio-weapons, genetic engineering, fracking), and I anticipate that getting worse now that the "resource base" of the Earth is declining--which implies a space-faring civilization being the only route of escape. We are also, at certain times, not focusing on the right science because of this. So the future to me, the time after this present, may well be worse based on that article. I still remain an optimist by and large, but I thought that was well worth considering.
Worse in terms of biological diversity and, from a human point of view, sustainability and our ability to survive on this planet.
You understand that the reason why ecosystems and species will continue to disappear is not increased technology or rapacious individual need, but rather that this world can't sustain the number of people living on it at a European/Japanese, let alone American, lifestyle? It's more about the sheer scope of human numbers than about any invention or man-made doomsday device.
It's a very important question whether we can actually control our technology, or whether it makes things so chaotic that its unintended consequences destroy us. We've already seen many examples where our technology has caused us damage, whether through environmental catastrophe or through rapid stock market swings from computerized trading. One hope that I have is that we have been thinking about these issues for a long time through people like Ray Kurzweil and visionary sci-fi writers like Asimov and Arthur C. Clarke. So we already have some ideas about how to deal with these issues. Anyway, this issue isn't one that can be discussed simply, and I'll try to post more later.
To muddy the waters again, this makes me think the world will be better. http://www.wired.com/2014/10/future-of-artificial-intelligence/ Incidentally, having met the Watson MEA team, I can say that these guys know their s**t. I'd entrust the future to them.