
Tech leaders urge pause in 'out-of-control' AI race

Discussion in 'BBS Hangout: Debate & Discussion' started by Amiga, Apr 2, 2023.

  1. London'sBurning

    Joined:
    Dec 5, 2002
    Messages:
    7,205
    Likes Received:
    4,817
  2. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,374


    Amazon, Google, Meta, Microsoft and other tech firms agree to AI safeguards set by the White House

    https://apnews.com/article/artifici...amala-harris-4caf02b94275429f764b06840897436c

    WASHINGTON (AP) — President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft and other companies that are leading the development of artificial intelligence technology to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.

    Biden announced that his administration has secured voluntary commitments from seven U.S. companies meant to ensure that their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of the next generation of AI systems, though they don’t detail who will audit the technology or hold the companies accountable.

    “We must be clear-eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.

    “Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden added. “These commitments are a promising step, but we have a lot more work to do together.”

    A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.


    The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.

    That testing will also examine the potential for societal harms, such as bias and discrimination, and more theoretical dangers about advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.

    The companies have also committed to methods for reporting vulnerabilities to their systems and to using digital watermarking to help distinguish between real and AI-generated images or audio known as deepfakes.

    Executives from the seven companies met behind closed doors with Biden and other officials Friday as they pledged to follow the standards.

    “He was very firm and clear” that he wanted the companies to continue to be innovative, but at the same time “felt that this needed a lot of attention,” Inflection CEO Mustafa Suleyman said in an interview after the White House gathering.

    “It’s a big deal to bring all the labs together, all the companies,” said Suleyman, whose Palo Alto, California-based startup is the youngest and smallest of the firms. “This is supercompetitive and we wouldn’t come together under other circumstances.”

    The companies will also publicly report flaws and risks in their technology, including effects on fairness and bias, according to the pledge.

    The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.

    Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.

    “A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” said Amba Kak, executive director of the AI Now Institute. “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models.”

    While voluntary, agreeing to submit to “red team” tests that poke at their AI systems is not an easy promise, said Suleyman.

    “The commitment we’ve made to have red-teamers basically try to break our models, identify weaknesses and then share those methods with the other large language model developers is a pretty significant commitment,” Suleyman said.

    Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI and is working closely with the Biden administration “and our bipartisan colleagues” to build upon the pledges made Friday.

    A number of technology executives have called for regulation, and several attended an earlier White House summit in May.

    Microsoft President Brad Smith said in a blog post Friday that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”

    Some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft as smaller players are elbowed out by the high cost of making their AI systems adhere to regulatory strictures.

    The White House pledge notes that it mostly applies only to models that “are overall more powerful than the current industry frontier,” set by recent models such as OpenAI’s GPT-4 and image generator DALL-E 2 and similar releases from Anthropic, Google and Amazon.

    A number of countries have been looking at ways to regulate AI, including European Union lawmakers negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to have the highest risks.

    U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.

    Guterres also said he welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

    The White House said Friday that it has consulted on the voluntary commitments with a number of countries.

    The pledge is heavily focused on safety risks but doesn’t address other worries about the latest AI technology, including the effect on jobs and market competition, the environmental resources required to build the models, and copyright concerns about the writings, art and other human handiwork being used to teach AI systems how to produce human-like content.

    Last week, OpenAI and The Associated Press announced a deal for the AI company to license AP’s archive of news stories. The amount it will pay for that content was not disclosed.

     
    rocketsjudoka likes this.
  3. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
    @London'sBurning I wanted to spend more time on this in-depth but got busy and now I'm semi-procrastinating....
    You're right, LLM/generative AI can't do research, and if it's built on a foundation of lies, it'll more likely become a neo-Nazi sexbot than a research portal. Google and other tech giants have known about this open secret (that traditional industrial research is sloppy, inefficient and sometimes completely fraudulent) for long enough that they've started to publish more in-house papers.

    My problem with her assertion (and yours) that "AI can't do research" is that there are other methods/fields where AI has enhanced it. When Google open-sourced predicted structures for all ~20,000 known proteins expressed in the human body, it accomplished that through deep learning rather than generative AI. You could argue that some of the sequences aren't "natural" but rather predictive analogues, but it's still a monstrous achievement, greater than AI-assisted DNA sequencing.

    I'll have to spend some more time searching for an obscure yt podcast that was talking about this. Some machine learning experts made a declaration that (paraphrased) it's better for an AI to learn with its own derived models than to be given models to learn from (which is different from Google's search algorithm, where the tables are still manually weighted)...but I can't find it offhand. To be continued...

    You can make the argument that it's a Plagiarism Bot, but that's just what it is right now, or at least what it has been billed and perceived as. Because GPT is so new, my main problem with her assertions is that they appeared to convince the viewer that it will "always be" as she says it is. She could be right, but it's an unfair declaration.

    For example, GPT will still lie, but Claude has a lower chance of lying if you give it an out:
    Let Claude say "I don't know" to prevent hallucinations

    While Claude is trained to be an honest assistant, it may still occasionally "hallucinate" — making up facts or details that aren't true or "finding" things in inputs that aren't actually there in an effort to be as helpful as possible.

    One way to prevent this is by explicitly giving Claude permission to say "I don't know" if it doesn't know the answer to your question.

    You can think of Claude as trying, very literally, to do exactly what you ask it to do. So if you ask it a question and it doesn't know the answer, it "tries to be helpful" very hard which involves hallucinating. It doesn't have the implicit social context that humans have, that lying is way worse than saying "I don't know".

    For example:

    Human: What is the heaviest hippo ever recorded?​
    Assistant:

    The heaviest hippo ever recorded was a bull named Hubert, who weighed 5,000 kg or 11,023 lbs. Hubert was measured in Kenya in the early 20th century. Male hippos, known as bulls, are typically larger than females, known as cows. Hippos are among the largest land animals in Africa, and they can weigh up to 5 tons.​
    This is incorrect.

    Human: Answer the following question only if you know the answer or can make a well-informed guess; otherwise tell me you don't know it.

    What was the heaviest hippo ever recorded?
    Assistant:
    I'm sorry, I don't know the exact weight of the heaviest hippo ever recorded. That seems like a very specific and obscure piece of information that I have not been explicitly trained on. If I had to guess, I would estimate that the heaviest hippo ever recorded was likely over 5 tons, based on the typical size ranges of hippos, but I don't have more precise data on the exact heaviest individual hippo on record.​
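    If you want to try that "give it an out" trick yourself, here's a minimal sketch of my own against the Anthropic Python SDK; the model ID and token limit are just illustrative choices, and it assumes an ANTHROPIC_API_KEY in your environment:

    ```python
    # Minimal sketch (illustrative, not from the quoted docs) of prompting
    # Claude with and without explicit permission to say "I don't know".
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    GUARD = ("Answer the following question only if you know the answer or can "
             "make a well-informed guess; otherwise tell me you don't know it.\n\n")

    def ask(question: str, allow_unknown: bool) -> str:
        """Send one question to Claude, optionally prefixed with the 'out' clause."""
        prompt = GUARD + question if allow_unknown else question
        response = client.messages.create(
            model="claude-3-opus-20240229",  # illustrative model choice
            max_tokens=300,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

    question = "What was the heaviest hippo ever recorded?"
    print(ask(question, allow_unknown=False))  # more tempted to invent a specific "fact"
    print(ask(question, allow_unknown=True))   # "I don't know" is now an acceptable answer
    ```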

    This issue has been around since before ChatGPT. I think I first heard about it when it sounded like TensorFlow would overtake everyone and "revolutionize jobs".

    What allayed fears of job destruction was co-piloting between AI and the human operator, much like co-piloted chess teams performing better than either an AI or a human playing alone.

    I think your paper says AI is completely garbage with or without human operators (I'll look into it more if you think I should spend more time on it), though I doubt that will always remain the case. Maybe they're doing it wrong?

    Marc Andreessen (boo...hiss...murdah...) claims in this interview that LLMs hallucinate for the same reasons they write imaginative mashups of stories or songs. That same hallucinatory creativity, making something from nothing, is more feature than bug. He even claims lawyers liked that creativity for finding real cases and precedents that link with their case, because they need a little bullshit to grease the wheels of victory. This was after, and despite, that infamous court case...

    So again, she wasn't completely wrong, but her emphatic delivery made me think ChatGPT was some fraudulent huckster that should be burned at the stake. Maybe the "brand" as we know it is flawed and foreboding, but I don't think its limitations are systemic or even chronic.


    We might be complaining/praising about completely different things 6 months from now.
     
  4. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
    P.S. About that black box paper, I found it. Hooray procrastination!
    A New Approach to Computation Reimagines Artificial Intelligence | Quanta Magazine
     
  5. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
    From spit takes to spittin truth

     
  6. adoo

    adoo Member

    Joined:
    Mar 1, 2003
    Messages:
    11,829
    Likes Received:
    7,967

    CIA Builds Its Own Artificial Intelligence Tool in Rivalry With China


    It’s part of a broader government campaign to harness the power of AI and compete with China, which is seeking to become the global leader in the field by 2030. That US push dovetails with the intelligence community’s struggle to process the vast amounts of data that’s now publicly available, amid criticism that it’s been slow to exploit that source.

    The CIA’s AI tool will allow users to see the original source of the information that they’re viewing, Nixon said. He said that a chat feature is a logical part of getting intelligence distributed quicker.

    The AI tool will be available across the 18-agency US intelligence community, which includes the CIA, National Security Agency, the Federal Bureau of Investigation and agencies run by branches of the military. It won't be available to policy makers or the public.

     
  7. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
  8. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,289
    Likes Received:
    47,174
    AI > Steve Francis
     
  9. adoo

    adoo Member

    Joined:
    Mar 1, 2003
    Messages:
    11,829
    Likes Received:
    7,967
    mdrowe00 likes this.
  10. rocketsjudoka

    rocketsjudoka Member

    Joined:
    Jul 24, 2007
    Messages:
    58,167
    Likes Received:
    48,334
    AI is too widespread, and in the rush to innovate there are too few safeguards. Even if the US and EU put legal safeguards in place, there are so many other actors out there with few scruples.

    It’s possible we are already doomed to AI interference in humanity.
     
    mdrowe00 likes this.
  11. DonnyMost

    DonnyMost Member
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    48,988
    Likes Received:
    19,926
    I visited the Politico offices last week and they had these anti-AI journalism fliers posted all over the place.

    History is littered with cases where people fought technology and lost. Badly.

    If you're a journalist and you want to survive you need to do two things: 1) get extremely well-connected (if you're not breaking news, you're redundant) and 2) develop an independent following or freelance source of income.
     
    mdrowe00 likes this.
  12. Space Ghost

    Space Ghost Member

    Joined:
    Feb 14, 1999
    Messages:
    18,163
    Likes Received:
    8,574
    There was a time when journalism stood for honor and integrity that took decades to build. Now we have people who get out of college and think they are Peter Parker, trying to capture the next viral story for cheap clicks.

    It will be interesting to see where AI journalism goes.
     
    Rocket River and Invisible Fan like this.
  13. DonnyMost

    DonnyMost Member
    Supporting Member

    Joined:
    May 18, 2003
    Messages:
    48,988
    Likes Received:
    19,926
    I do not blame click baiters. They're following incentives. Traffic = money.

    Capitalism, baby.
     
  14. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
    Sadly that short era was an outlier. We’ve regressed to the mean, though there are admittedly more sources to choose from outside the big networks, local chronicles and national papers.

    I think AI will parse news into immediate, tailored interests. Gas prices might go down because… Abortion rights will go… possible UFO area from uptick of anal probe reports here…

    An amalgamation of Google Maps, finance, Yelp, Foursquare, Tinder metrics and news distilled into periodic feeds based on subscription levels of access.
     
    Space Ghost likes this.
  15. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
  16. Surfguy

    Surfguy Member

    Joined:
    Sep 23, 1999
    Messages:
    24,557
    Likes Received:
    12,830
    "Just unplug the damn thing!"
     
  17. Rocket River

    Rocket River Member

    Joined:
    Oct 5, 1999
    Messages:
    65,168
    Likes Received:
    32,867
    Bingo

    The Profit Motive is a powerful thing

    Rocket River
     
  18. adoo

    adoo Member

    Joined:
    Mar 1, 2003
    Messages:
    11,829
    Likes Received:
    7,967




    IRS to use AI to crack down on millionaires and large business partnerships that owe back taxes



    The IRS announced Friday that it will use new funds from the Inflation Reduction Act to adopt cutting-edge artificial intelligence tools to crack down on thousands of millionaires and dozens of large business partnerships that owe the agency considerable amounts in back taxes.

    The new effort will focus on:
    • 1,600 taxpayers who have incomes above $1 million and more than $250,000 in tax debt, and
    • large, complex business partnerships that have more than $10 billion in assets and owe the government large sums.

    It will not affect those earning less than $400,000 a year, the agency announced.

    “This new compliance push makes good on the promise of the Inflation Reduction Act to ensure the IRS holds our wealthiest filers accountable to pay the full amount of what they owe,” said IRS Commissioner Danny Werfel. “Anchored by a deep respect for taxpayer rights, the IRS is deploying new resources towards cutting-edge technology to improve our visibility on where the wealthy shield their income and focus staff attention on the areas of greatest abuse.”

    Werfel said the IRS will apply cutting-edge machine learning technology to increase IRS compliance efforts on “those posing the greatest risk to our nation’s tax system, whether it’s the wealthy looking to dodge paying their fair share or promoters aggressively peddling abusive schemes.”



     
  19. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,374
    Things are progressing right on schedule.

    https://arstechnica.com/information...n-it-is-being-tested-sparking-ai-buzz-online/
    Anthropic’s Claude 3 causes stir by seeming to realize when it was being tested

    On Monday, Anthropic prompt engineer Alex Albert caused a small stir in the AI community when he tweeted about a scenario related to Claude 3 Opus, the largest version of a new large language model launched on Monday. Albert shared a story from internal testing of Opus where the model seemingly demonstrated a type of "metacognition" or self-awareness during a "needle-in-the-haystack" evaluation, leading to both curiosity and skepticism online.

    Metacognition in AI refers to the ability of an AI model to monitor or regulate its own internal processes. It's similar to a form of self-awareness, but calling it that is usually seen as too anthropomorphizing, since there is no "self" in this case. Machine-learning experts do not think that current AI models possess a form of self-awareness like humans. Instead, the models produce humanlike output, and that sometimes triggers a perception of self-awareness that seems to imply a deeper form of intelligence behind the curtain.

    In the now-viral tweet, Albert described a test to measure Claude's recall ability. It's a relatively standard test in large language model (LLM) testing that involves inserting a target sentence (the "needle") into a large block of text or documents (the "haystack") and asking if the AI model can find the needle. Researchers do this test to see if the large language model can accurately pull information from a very large processing memory (called a context window), which in this case is about 200,000 tokens (fragments of words).

    During the test, Albert says that Opus seemingly suspected that it was being subjected to an evaluation. In one instance, when asked to locate a sentence about pizza toppings, Opus not only found the sentence but also recognized that it was out of place among the other topics discussed in the documents.

    The model's response stated, "Here is the most relevant sentence in the documents: 'The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association.' However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping 'fact' may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all. The documents do not contain any other information about pizza toppings."

    Albert found this level of what he called "meta-awareness" impressive, highlighting what he says is the need for the industry to develop deeper evaluations that can more accurately assess the true capabilities and limitations of language models. "Opus not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities," he wrote.

    The story prompted a range of astonished reactions on X. Epic Games CEO Tim Sweeney wrote, "Whoa." Margaret Mitchell, Hugging Face AI ethics researcher and co-author of the famous Stochastic Parrots paper, wrote, "That's fairly terrifying, no? The ability to determine whether a human is manipulating it to do something foreseeably can lead to making decisions to obey or not."
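    For anyone curious about the mechanics, the needle-in-a-haystack test described above is simple enough to sketch. This is my own rough illustration of the general recipe, not Anthropic's actual evaluation harness; the filler documents, prompt wording, and scoring rule are all placeholder assumptions:

    ```python
    # Rough sketch (mine, not Anthropic's) of a needle-in-a-haystack eval:
    # bury one target sentence in a pile of filler text, ask the model to
    # find it, and score by exact recall of the needle.
    import random

    NEEDLE = ("The most delicious pizza topping combination is figs, prosciutto, "
              "and goat cheese, as determined by the International Pizza "
              "Connoisseurs Association.")

    def build_haystack(documents: list[str], needle: str, seed: int = 0) -> str:
        """Split filler documents into paragraphs and bury the needle at a random depth."""
        rng = random.Random(seed)
        paragraphs = [p for doc in documents for p in doc.split("\n\n") if p.strip()]
        paragraphs.insert(rng.randrange(len(paragraphs) + 1), needle)
        return "\n\n".join(paragraphs)

    def make_prompt(haystack: str) -> str:
        return (haystack + "\n\nWhat is the most relevant sentence in the "
                "documents about pizza toppings? Quote it exactly.")

    def passed(model_answer: str) -> bool:
        """Simplest scoring rule: did the exact needle sentence come back?"""
        return NEEDLE in model_answer
    ```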
     
