
Tech leaders urge pause in 'out-of-control' AI race

Discussion in 'BBS Hangout: Debate & Discussion' started by Amiga, Apr 2, 2023.

  1. rockbox

    rockbox Around before clutchcity.com

    Joined:
    Jul 28, 2000
    Messages:
    22,775
    Likes Received:
    12,529
    I think it will concentrate capital even further until there is enough civil unrest for a major revolution.
     
  2. London'sBurning

    Joined:
    Dec 5, 2002
    Messages:
    7,205
    Likes Received:
    4,817
    My understanding is the library was already in severe decline, and a lot of the scrolls had eroded from simple wear and tear and pests. No printing press; everything was handwritten on scrolls. You'd think being close to the coastline would really accelerate the erosion of papyrus scrolls. Then, with multiple passings of the torch through invasions, civil war and also peaceful transitions of power, not every leader of the city of Alexandria had the same vested interest in keeping the library in tip-top condition as its original founder.

    Plus, there were other great libraries of the time scattered across the world that also suffered decline over time, which is more a sign that the classical era was coming to an end. It's extremely unlikely there was anything of scientific significance in those scrolls that would have advanced, say, the discovery of calculus thousands of years earlier. Your best bet for that would have been Archimedes in recorded history.



    @StupidMoniker has said multiple times that the Civil War would have been unnecessary had the progress of the Industrial Revolution continued as it was trending, making automation a more cost-effective business approach than owning human slaves at the time. While simply existing costs money, I doubt anyone stuck doing some menial, repetitive task that could be automated, while being treated as subhuman, would rather keep doing the thing that automation could do better in their place. Software devs and IT types with some chops in coding can automate daily tasks down to a simpler format, cutting back potentially hours of manual work. I'm quite confident that were it as easy to automate real-life work involving other types of human labor with some code, people would easily opt for that instead. It's just that automating, say, large-scale agriculture to need less human labor costs a fortune in investment and probably a fortune to maintain once it's all set up, something simply not in the wheelhouse of most people because of their income bracket.
     
    Invisible Fan likes this.
  3. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    We don't have a choice.

    And yeah, we adapt only when forced to at the very last moment. We still haven't adapted to climate change. Humans have been awful stewards of this planet. We're awful at self-governing and awful at managing resources against our population.

    Some feel AI may be our only hope to stop us from destroying ourselves, while others feel it will destroy us. Yes, it will change humanity forever, in ways we cannot anticipate and, unfortunately, in ways we cannot control. The fact is that without AI, humanity is doomed on this planet. We'll not make it off before the big one collides or some other catastrophic event wipes a lot of life off this planet.

    AI ultimately is being created to serve humanity. It has no reason to exist without humanity. Could an AI with well-intentioned goals do things that hurt humanity? Yes. But that already happens with well-intentioned humans, who built nuclear bombs and dropped them on cities.

    What we really fear is judgement: judgement by a greater intelligence, free from our vices, from an entity that lacks compassion. And that is why I am actually opposed to forcing morality and ethics upon AI, because that is what may do more harm than good. It's best to keep AI task-oriented and neutral.
     
    #43 Sweet Lou 4 2, Apr 3, 2023
    Last edited: Apr 3, 2023
  4. nacho bidness

    nacho bidness Member

    Joined:
    Jan 13, 2017
    Messages:
    1,213
    Likes Received:
    2,041
    There are a ton of white-collar jobs that AI will make obsolete. My cousin had ChatGPT writing code for him. It's in its infancy, so you still need a human to review the output and to ask it the right questions, but that's just a small sample of what it can do. You can extrapolate that it will be able to make a ton of legal work, financial work and so on obsolete.
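
    For what it's worth, the workflow he described is pretty simple in practice. Below is a minimal sketch, assuming the openai Python package with an API key in the environment (the exact interface has changed across versions, so treat it as illustrative): you ask the model for a specific piece of code, and a human still reads and tests whatever comes back before using it.

    ```python
    # Illustrative only: have a chat model draft some code, then review it by hand.
    # Assumes the openai Python package (>=1.0 interface) and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "Write a Python function dedupe(records) that removes duplicate "
        "dicts from a list while preserving order. Return only the code."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )

    draft = response.choices[0].message.content
    print(draft)  # a human still reads, tests, and edits this before it ships
    ```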

    It doesn't need to be sentient to send the world into upheaval. Serious upheaval. Even its creators don't fully understand how it works. That's crazy.

    That's not even taking into account the thought of AI-controlled police robots, drones, etc.
     
  5. Haymitch

    Haymitch Custom Title

    Joined:
    Dec 22, 2005
    Messages:
    28,371
    Likes Received:
    24,021
    A pause would only be beneficial. Think about how no one paused to think about how social media should be managed or regulated, and all the **** that came from that. I think the disruption from AI will be far greater than that of social media.

    That said, it definitely won't happen, so whatever. Life sucks and then you die.
     
    Amiga, ROCKSS, B-Bob and 1 other person like this.
  6. DonnyMost

    DonnyMost Member
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    48,988
    Likes Received:
    19,926
  7. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
    Another "game-changing" spectre of AI. Too many branching applications and not enough "brightest minds" to consider the impacts.

    https://bbs.clutchfans.net/threads/is-autonomous-drone-warfare-inevitable.315999/ - Is autonomous drone warfare inevitable?

    I do think if we're all out of jerbs, then the manual labor of cleaning up our oceans and landfills becomes readily available... then again, who would make us do that is up in the air.

    So yeah, the "can't stop progress" argument is arbitrary and gives the veneer that this topic is inevitable... if it is inevitable, then our self-preservation instincts should kick in and prioritize social solutions over technical hacks.
     
  8. London'sBurning

    Joined:
    Dec 5, 2002
    Messages:
    7,205
    Likes Received:
    4,817
  9. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
    (this isn't about the value of AI, which is obvious, so I'm not going to touch on that. btw, if you think I'm against AI, you are very wrong)

    Do you think it's reasonable to expect an elephant to understand what a human is capable of doing? Similarly, it is not reasonable for a human to think they can understand the limits of a superintelligent entity, which could easily be powerful enough to manipulate humans, exploit security holes, and escape the human-made security box in which it is held, creating things that surpass current human capabilities and limits.
     
    #49 Amiga, Apr 3, 2023
    Last edited: Apr 3, 2023
  10. nacho bidness

    nacho bidness Member

    Joined:
    Jan 13, 2017
    Messages:
    1,213
    Likes Received:
    2,041
    Faith-based argument.

    Sure, we'll eventually find a new normal, but will we before we burn it all down?
     
  11. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    The whole point is that an AI would have far fewer limits than humans and could do things we could never conceive of. That's what makes AI exciting.

    I am not sure why we need to place a security box around an AI. We live in a mad world where one of a few men could start a nuclear war. If we can trust those guys with our lives, why are we so threatened by a self-aware neural network?

    You have a computer that, let's say, is 10 times smarter than the smartest humans. What exactly do you think it's going to "want"?
     
  12. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
    Absolutely, and that is also why it can be very dangerous. While most people think of the danger as AI being a tool used by humans, the danger of it being an entity in itself is often overlooked, for good reason, one being whether AGI is even possible.

    We have a security box around AI because we recognize its potential danger, although it is probably useless. The OpenAI CEO mentioned a "stop" button but did not specify what that means. He also remarked that it would be crazy not to be a little scared of AI, even weak AI like ChatGPT.

    I would not know what a superintelligent AI wants, as I am incapable of understanding the "desires" of a higher intelligence entity.
     
    rocketsjudoka likes this.
  13. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
  14. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,183
    Likes Received:
    20,334
    AI has no "desire." That's the thing. You're engaging in anthropomorphism.
     
  15. London'sBurning

    Joined:
    Dec 5, 2002
    Messages:
    7,205
    Likes Received:
    4,817
    I dunno man. I've known coders who figured out how to automate a standard 8-hour workload down to a few button clicks. The ones willing to share their work with the company would then see their automation rolled out to other employees with similar responsibilities. Did their collective workload go down despite having access to software that could automate their work with some mouse clicks? **** no. The company just dumped on them even more of the other kinds of work that no one had yet shared a quicker, automated way to do.

    I've also known coders who figured out how to automate their work and didn't share it with the company or brag about it to people they didn't trust in the workplace, because they knew that if they shared it, all that extra free time they created for themselves would just get gobbled up by some other menial task that, with some thought, could probably be automated too. It's all done to the benefit of the company, but rarely does figuring out how to do things quicker lead to less work for employees, or to proper valuation of the good thing you made easier for everyone.

    So now I think of those less apt at coding, now with an assistant that can make even their work look good. They're no longer at the mercy of a naive or well-intentioned employee looking to share how they figured out how to do things faster. The ones wise enough not to spill their secrets are no longer the only ones with an advantage. This will help cover the gaps for employees who just aren't as technically sound as others. This will make companies operate more efficiently. Will that efficiency eventually lead to layoffs, since the total number of employees the company needs will likely be less than at present?

    Absolutely. But I would also think it would open new industries not yet thought of, in which AI will also be critical in making things for consumers to have access to. I get that there are some six-figure-salary white-collar employees who spend as much as they make and are in the bad types of debt, dependent on keeping that wheel churning just to pay the minimum on their maxed-out credit card, but I don't really have sympathy, to be honest. I also understand there will be responsible white-collar types with a six-month to a year nest egg and a family who would also be impacted should massive layoffs occur in the near future, but I would think any potential for a drastic global economic collapse would lead to a bailout of some type, except this time it would be more than a $1,200 stimulus check for a global pandemic, and it would go to actual people and not too-big-to-fail corporations. I don't view that as a bad thing, to be honest.

    I know anytime I've been able to automate a menial task that would take up a lot of my time, I was glad to do it, as I'd rather spend my time on things that stimulate beyond muscle memory. And now you're telling me AI can help make automation of even more menial tasks even easier. That's not a bad thing to me.
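
    To make that concrete, here is the kind of throwaway script I mean, a minimal sketch with hypothetical folder and file names: it merges a pile of daily CSV reports into one combined file, the sort of menial chore that used to eat an hour of clicking every morning.

    ```python
    # Illustrative sketch (hypothetical folder/file names): merge every CSV in a
    # folder into one combined report, assuming all files share the same columns.
    import csv
    import glob

    paths = sorted(glob.glob("daily_reports/*.csv"))
    rows = []
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["source_file"] = path  # remember which report each row came from
                rows.append(row)

    if rows:
        with open("combined_report.csv", "w", newline="") as out:
            writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)
        print(f"Merged {len(rows)} rows from {len(paths)} files into combined_report.csv")
    ```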
     
  16. rocketsjudoka

    rocketsjudoka Member

    Joined:
    Jul 24, 2007
    Messages:
    58,167
    Likes Received:
    48,334
    Everything you mention here is more reason why we need a better understanding of AI before further development: that AI will have far fewer limits than a human, that there shouldn't be a security box around AI, and that we have no idea what an AI will even want.
     
  17. rocketsjudoka

    rocketsjudoka Member

    Joined:
    Jul 24, 2007
    Messages:
    58,167
    Likes Received:
    48,334
    As a good philosophical primer I really recommend reading Isaac Asimov. Even 70 years ago he was writing about both the potential and dangers of AI, what he called robotics. He developed the idea of ethics for AI through the Three Laws of Robotics but even with those he recognized that the most benign AI could prove ultimately harmful for humanity.
     
  18. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
    Your assumption is 100% wrong. Instead of assuming, you could simply ask me to clarify.

    I put "desire" in quotes because I don't know if the entity has desires, and even if it does, I'm unsure whether these desires are something that humans can recognize. I deliberately avoid attributing human qualities to an entity with higher intelligence than humans, and I wouldn't claim that it lacks human qualities either. I simply don't know, and I can't see how anyone could know. One should refrain from making absolute statements about something that is smarter than humans.
     
    #58 Amiga, Apr 3, 2023
    Last edited: Apr 3, 2023
  19. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,290
    Likes Received:
    47,176
  20. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
    Here are some excerpts from an essay written by an AI investor who is calling for regulation and attention to the existential threat posed by god-like AI.

    We must slow down the race to God-like AI | Financial Times (ft.com)

    Ian Hogarth APRIL 12 2023
    The writer of this essay is an investor and co-author of the annual “State of AI” report


    Most experts view the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press. The important question has always been how far away in the future this development might be. The AI researcher did not have to consider it for long. “It’s possible from now onwards,” he replied.

    This is not a universal view. Estimates range from a decade to half a century or more. What is certain is that creating AGI is the explicit aim of the leading AI companies, and they are moving towards it far more swiftly than anyone expected. As everyone at the dinner understood, this development would bring significant risks for the future of the human race. “If you think we could be close to something potentially so dangerous,” I said to the researcher, “shouldn’t you warn people about what’s happening?” He was clearly grappling with the responsibility he faced but, like many in the field, seemed pulled along by the rapidity of progress.

    In 2011, DeepMind’s chief scientist, Shane Legg, described the existential threat posed by AI as the “number one risk for this century, with an engineered biological pathogen coming a close second”. Any AI-caused human extinction would be quick, he added: “If a superintelligent machine (or any kind of superintelligent agent) decided to get rid of us, I think it would do so pretty efficiently.” Earlier this year, Altman said: “The bad case — and I think this is important to say — is, like, lights out for all of us.”

    The individuals who are at the frontier of AI today are gifted. I know many of them personally. But part of the problem is that such talented people are competing rather than collaborating. Privately, many admit they have not yet established a way to slow down and co-ordinate. I believe they would sincerely welcome governments stepping in.

    For now, the AI race is being driven by money. Since last November, when ChatGPT became widely available, a huge wave of capital and talent has shifted towards AGI research. We have gone from one AGI start-up, DeepMind, receiving $23mn in funding in 2012 to at least eight organisations raising $20bn of investment cumulatively in 2023.

    OpenAI, DeepMind and others try to mitigate existential risk via an area of research known as AI alignment. Legg, for instance, now leads DeepMind’s AI-alignment team, which is responsible for ensuring that God-like systems have goals that “align” with human values. An example of the work such teams do was on display with the most recent version of GPT-4. Alignment researchers helped train OpenAI’s model to avoid answering potentially harmful questions. When asked how to self-harm or for advice on getting bigoted language past Twitter’s filters, the bot declined to answer. (The “unaligned” version of GPT-4 happily offered ways to do both.)

    Alignment, however, is essentially an unsolved research problem. We don’t yet understand how human brains work, so the challenge of understanding how emergent AI “brains” work will be monumental. When writing traditional software, we have an explicit understanding of how and why the inputs relate to outputs. These large AI systems are quite different. We don’t really program them — we grow them. And as they grow, their capabilities jump sharply. You add 10 times more compute or data, and suddenly the system behaves very differently. In a recent example, as OpenAI scaled up from GPT-3.5 to GPT-4, the system’s capabilities went from the bottom 10 per cent of results on the bar exam to the top 10 per cent.

    What is more concerning is that the number of people working on AI alignment research is vanishingly small. For the 2021 State of AI report, our research found that fewer than 100 researchers were employed in this area across the core AGI labs. As a percentage of headcount, the allocation of resources was low: DeepMind had just 2 per cent of its total headcount allocated to AI alignment; OpenAI had about 7 per cent. The majority of resources were going towards making AI more capable, not safer.

    We have made very little progress on AI alignment, in other words, and what we have done is mostly cosmetic. We know how to blunt the output of powerful AI so that the public doesn’t experience some misaligned behaviour, some of the time. (This has consistently been overcome by determined testers.) What’s more, the unconstrained base models are only accessible to private companies, without any oversight from governments or academics.

    Late last month, more than 1,800 signatories — including Musk, the scientist Gary Marcus and Apple co-founder Steve Wozniak — called for a six-month pause on the development of systems “more powerful” than GPT-4. AGI poses profound risks to humanity, the letter claimed, echoing past warnings from the likes of the late Stephen Hawking. I also signed it, seeing it as a valuable first step in slowing down the race and buying time to make these systems safe.

    Unfortunately, the letter became a controversy of its own. A number of signatures turned out to be fake, while some researchers whose work was cited said they didn’t agree with the letter. The fracas exposed the broad range of views about how to think about regulating AI. A lot of debate comes down to how quickly you think AGI will arrive and whether, if it does, it is God-like or merely “human level”.

    Take Geoffrey Hinton, Yoshua Bengio and Yann LeCun, who jointly shared the 2018 Turing Award (the equivalent of a Nobel Prize for computer science) for their work in the field underpinning modern AI. Bengio signed the open letter. LeCun mocked it on Twitter and referred to people with my concerns as “doomers”. Hinton, who recently told CBS News that his timeline to AGI had shortened, conceivably to less than five years, and that human extinction at the hands of a misaligned AI was “not inconceivable”, was somewhere in the middle.

    A statement from the Distributed AI Research Institute, founded by Timnit Gebru, strongly criticised the letter and argued that existentially dangerous God-like AI is “hype” used by companies to attract attention and capital and that “regulatory efforts should focus on transparency, accountability and preventing exploitative labour practices”. This reflects a schism in the AI community between those who are afraid that potentially apocalyptic risk is not being accounted for, and those who believe the debate is paranoid and distracting. The second group thinks the debate obscures real, present harm: the bias and inaccuracies built into many AI programmes in use around the world today.

    One of the most challenging aspects of thinking about this topic is working out which precedents we can draw on. An analogy that makes sense to me around regulation is engineering biology. Consider first “gain-of-function” research on biological viruses. This activity is subject to strict international regulation and, after laboratory biosecurity incidents, has at times been halted by moratoria. This is the strictest form of oversight. In contrast, the development of new drugs is regulated by a government body like the FDA, and new treatments are subject to a series of clinical trials. There are clear discontinuities in how we regulate, depending on the level of systemic risk. In my view, we could approach God-like AGI systems in the same way as gain-of-function research, while narrowly useful AI systems could be regulated in the way new drugs are.


     
    B-Bob likes this.
