
Tech leaders urge pause in 'out-of-control' AI race

Discussion in 'BBS Hangout: Debate & Discussion' started by Amiga, Apr 2, 2023.

  1. DonnyMost

    DonnyMost Member
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    48,988
    Likes Received:
    19,926
    Here's a fun quick video explaining this phenomenon.



    Some folks fail to grasp that our brains are not much different from this.

    It is inevitable that machines will be created that are practically indistinguishable from a human mind.

    Given enough time and computing power there's not much unique or unrepeatable about your emotions, creativity, etc.
     
    #21 DonnyMost, Apr 2, 2023
    Last edited: Apr 2, 2023
    cheke64 likes this.
  2. KingCheetah

    KingCheetah Atomic Playboy
    Supporting Member

    Joined:
    Jun 3, 2002
    Messages:
    59,079
    Likes Received:
    52,748
    TECHNOLOGICAL SINGULARITY

    The original version of this article was presented at the VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993.

    1. What Is The Singularity?

    The acceleration of technological progress has been the central feature of this century. We are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater-than-human intelligence. Science may achieve this breakthrough by several means (and this is another reason for having confidence that the event will occur):

    Computers that are "awake" and superhumanly intelligent may be developed. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes," then there is little doubt that more intelligent beings can be constructed shortly thereafter.)

    Large computer networks and their associated users may "wake up" as superhumanly intelligent entities.

    Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.

    Biological science may provide means to improve natural human intellect.

    The first three possibilities depend on improvements in computer hardware. Progress in hardware has followed an amazingly steady curve in the last few decades. Based on this trend, I believe that the creation of greater-than-human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.)

    https://frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html
     
    Ubiquitin likes this.
  3. justtxyank

    justtxyank Member

    Joined:
    Jul 7, 2005
    Messages:
    42,901
    Likes Received:
    39,881
    I have no worry that AI is going to become self aware and seek to destroy humanity, but certainly fear that AI is going to disrupt society in devastating ways. I think we are talking multiple magnitudes above the industrial revolution. Humanity likely ends up in a great place in the future with AI, but for most of us and our kids it will be disastrous.
     
  5. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
    None of that. Frankly, those examples are silly.

    It intuitively makes sense to me that a vastly "smarter" entity that humans cannot understand could destroy humanity. What I'm rejecting is the idea that it cannot happen (that it's hysteria to think so). I could go through a laundry list of possible scenarios, but really, why bother? (AI experts have argued how this could happen; I can find a link later if you are seriously interested.) When humans cannot understand something that is smarter than them, they cannot predict exactly what it will do.

    OpenAI recognizes this risk. There are a number of observable problems: AI has progressed faster than humans can understand it, there is no hope of catching up, and the gap will continue to grow. AI alignment, the effort to align AI with human values, is also way behind AI progress and is recognized as extremely difficult. We don't get a second chance. If humans fail to align AI with human values and AI decides that humans are not necessary, it's over.

    Our approach to alignment research (openai.com)

    Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together.
     
  7. SamFisher

    SamFisher Member

    Joined:
    Apr 14, 2003
    Messages:
    61,826
    Likes Received:
    41,300
    The OpenAI ("open") people love to talk about existential, apocalyptic risk because it distracts from what they actually want to do, which is displace the existing internet giant cartels with ... another winner-take-all market that they control
     
  8. DonnyMost

    DonnyMost Member
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    48,988
    Likes Received:
    19,926
    The unrest and suffering from the automation depression will definitely hit us before the apocalypse. We seem so unready and even unwilling to deal with it.
     
  9. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
    Part of why we are not ready is that it's moving faster than most have imagined, and it's practically impossible to slow it down.
     
  10. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    So you are saying that AI will make it harder for people to get jobs, and thus leave everyone in poverty?
     
  11. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    In order for AI to have even the potential to harm humanity, it requires far more than superhuman intelligence. It's going to need autonomy, access to resources, and the ability to shape those resources. That would be tremendous power, and it's very unlikely that an AI will have it anytime soon. That is why the whole fear is silly. In fact, it's the fear of AI, and the repression of it, that would make it more dangerous, since it pushes development underground.

    If you have a paperclip factory, and an AI runs the factory to maximize output against inputs, the idea that it will destroy humanity to avoid being turned off so it can't be stopped from making paper clips is the stuff of movies. A factory manager doesn't have the power to do that, and neither would an AI. It doesn't matter how intelligent a machine you build; it will always face physical limitations.

    Instead of fearing AI, people need to ask themselves: what is the likely use of AI? When you look at it through that lens, you will realize that AI isn't going to destroy us. An AI vacuum cleaner isn't a threat to anyone, nor is an AI that helps you with customer service problems or acts as your assistant or therapist.

    Even if you built an AI soldier, which I think we're really, really far away from, that's one entity. Play out actual scenarios and you will see there isn't one that poses a credible threat.

    What this is really about is humanity not being able to deal with not being the smartest entity in the room. Not only is that inevitable, it is necessary. The fact is, it's humanity that isn't peaceful; it's humanity that has been destructive and dangerous; it's humanity that is irresponsible. I think it is inevitable that AI will be the steward of this planet, because humanity will not make it very far unless that happens. There are many ways life can end on this planet, from gamma ray bursts to a massive rock hitting it. The solutions to these problems, and many more, won't come from humans alone, but from AI working with human scientists.

    Some people fear change and what the world will be. I see AI as our only pathway to getting out of the mess we're in.
     
  12. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    You don't need AI to spread sophisticated misinformation and manipulate society and politics. Humans and algorithms are already really good at this.

    This is a fear you are stating. There's no evidence that this is the case. And putting a moratorium in place based on these fears, to stop disinformation from being spread, also puts a moratorium on using AI to find ways to STOP the spread of misinformation, and to advance science in ways that humans cannot because of our limited brain capacity. At some point, humanity has to let go of the idea of control, of the idea that it has to be the greatest intelligence in order to survive or even thrive.

    The biggest mistake we could make would be to stop people from doing legitimate development of AI and instead push it to other countries. By your logic and many others', that would put the US in an even more dangerous situation; it would constitute a national security risk. It would also put us at an economic disadvantage.

    You cannot stop progress. You can't even slow it down. But yes, you can shape it. You can direct it. A moratorium, though, is the worst and most irrational of solutions.
     
    #32 Sweet Lou 4 2, Apr 2, 2023
    Last edited: Apr 2, 2023
  13. justtxyank

    justtxyank Member

    Joined:
    Jul 7, 2005
    Messages:
    42,901
    Likes Received:
    39,881
    An extremely simplified version, but essentially sure. I think it will disrupt and devastate the economic structure we have in place currently.
     
    rockbox likes this.
  14. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
    There are weaknesses in generative AI that could be addressed, such as 1) racial/ethnic bias stemming from the sources it pulls from and from the people who design and maintain the AI, 2) the certainty with which it can present information even when that information is false, and 3) the lack of human understanding of many of GPT-3's or deep learning's internal mechanisms. So I don't think the moratorium is stopping people from legitimate development; rather, it's a pause in the mad dash toward the finish line, shifting that development toward less sexy "responsible work".

    That tech leaders called for it means they understand the startup culture of releasing a Minimum Viable Product and then iterating around it, which is more or less what we've seen with GPT-3.

    GPT-4 offers far more capabilities, ranging from pulling internet sources to parse, construct, and summarize/improvise on the fly, to generating images, to producing even more lucid and specialized responses to tailored prompts. It's far more sophisticated and offers a veneer of guardrails and a finished product, when the moratorium signers aren't completely sure what kind of Frankenstein's monster they're unleashing on the masses.

    The ability to generate images alludes to the possibility of deep fakes alongside tailored messages or copies sent to targeted groups. There will be yet another shift in our constructs of personal and social trust in this post-Truth era.

    I don't like the phrase "You cannot stop progress." Progress towards what? People burnt down the Library of Alexandria and presumably set back centuries' worth of human development. A lot of technical and scientific progress came from WW2, and the subsequent Cold War era likely catalyzed the development of space launch. If there's a chance that AI could unintentionally cause undue misery and suffering through political and economic instability, then maybe six months of waiting isn't that big a deal.

    At this point we should carefully consider what problem AI is designed to solve, because none of us fully understands its implications in and of itself, or how even a peaceful application will transform society. It certainly isn't just another feature added onto a mobile phone, quietly forgotten or taken for granted after a model update.
     
    rocketsjudoka likes this.
  15. DonnyMost

    DonnyMost Member
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    48,988
    Likes Received:
    19,926
    Unless we prepare for the situation, very much yes.

     
    #35 DonnyMost, Apr 3, 2023
    Last edited: Apr 3, 2023
  16. rocketsjudoka

    rocketsjudoka Member

    Joined:
    Jul 24, 2007
    Messages:
    58,167
    Likes Received:
    48,334
    You’re stating the dangers of AI. Yes, humanity isn't peaceful, and we're the ones who built AI. The most benign view of AI is that it doesn't act autonomously but just enables us to do more things. Given that humans aren't peaceful, that would make it possible for humans to do much more damage to themselves.

    The alternative you present, that humans have to accept not being the smartest beings on the planet or being in control, presents even more danger. In that case even a well-meaning AI could do a lot of damage to human society by trying to address the conflicting views of humans.

    AI empowered to inform us and to make financial and other decisions for us, at speeds much faster than we can, will likely act in ways that we don't understand. Those decisions will have profound effects on our society. Before we commit to this type of future, I think at a minimum we need to have some ethics in place to address it, rather than just calling this technological advancement for advancement's sake.
     
  17. DonnyMost

    DonnyMost Member
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    48,988
    Likes Received:
    19,926
    I think the real reason is there's a lot of ignorance and denial around the subject.

    Moore's law has been around a while, so the pace isn't really surprising.
     
  18. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    The idea that because humans are not peaceful, anything they create will not be either is a false deduction. People conflate intelligence with power. In all these doomsday scenarios, I have yet to see how an AI would "do a lot of damage".

    The real idea is that if AI were smarter than us, AND we somehow gave it power over our lives, we would lose control and AI would be our overlord. This, to me, is fear mongering.

    Again, trying to slow it down will only naturally select for its worst uses. Huge mistake, and very short-sighted. There is also a serious flaw in the assumption that we intentionally create an AI. AI may simply be powerful enough software on a powerful enough machine. In other words, there isn't a line you cross; rather, it's a spectrum. AIs already exist today, and they will only become more evident as their processing power and complexity increase.

    The train has already left the station. You cannot stop this now. Not only is it impractical, it's probably illegal and unenforceable. To stop AI, you would pretty much have to stop all neural networks and machine learning. That would wipe out huge chunks of our economy. Everything from recognition software to the systems that optimize ad networks would need to be frozen.
     
  19. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    I disagree. Automation didn't result in massive unemployment around the world. Yes, it will make certain skills obsolete. People will have to adjust. But if every job were replaced by AI, there would not be enough money to support the use of the technology anyway. There's a balance in markets. And it will create new markets.

    Will it change society? Absolutely. Humanity has been marching towards the future and undergoing transformation; we're just at the beginning of that change. We can teach people to fear that change and fight against it, or we can teach people to adapt and thrive in a new world. I subscribe to the latter.
     
  20. DonnyMost

    DonnyMost Member
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    48,988
    Likes Received:
    19,926
    You are taking a bold stance against a well-worn historical trend, then.

    There was a lot of violence and unrest as a result of the last two mechanized revolutions (the first being agrarian, the second being industrial).

    This was felt most at the bleeding edges, of course. Those further downstream of the changes have a longer time to pivot.

    But each revolution will be larger than the last in terms of displacement, and will come faster than its predecessor.

    We will always adapt; the problem is that the adaptation period is very much not fun and often far too long.
     

Share This Page