
Tech leaders urge pause in 'out-of-control' AI race

Discussion in 'BBS Hangout: Debate & Discussion' started by Amiga, Apr 2, 2023.

  1. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
Ah, I thought that was part of the 3 original laws. Is there a 4th one ('the later addition of the zeroth law')?
     
  2. London'sBurning

    Joined:
    Dec 5, 2002
    Messages:
    7,205
    Likes Received:
    4,817
  3. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,294
    Likes Received:
    47,176
I would send the Terminator back in time to stop woke people from gluing themselves to the floor of NBA games
And of course stop Kathleen Kennedy
    @Xerobull
     
    ROXRAN likes this.
  4. ROXRAN

    ROXRAN Member

    Joined:
    Oct 12, 2000
    Messages:
    18,813
    Likes Received:
    5,218
No Terminators unfortunately, but right now comedians (which are just another thing wokies hate with every fiber of their being) are the closest things to Terminators we've got
     
  5. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,294
    Likes Received:
    47,176
    We’re trying to be like the aliens
    Be the owner and not the pet
    Humans getting dumb and woke
    We should all pray for skynet
    Technology already won
    It’s not about John Connor
    It’s about Elon
    Cause kids now are dumber
    Can’t do math
    Shouldn’t be confusing putting pronouns
    On their epitaph
Disney falling the way of Sears and Montgomery Ward
Elmer can't show guns but can show a sword
Disney made John Carter
Biden ain't better than Jimmy Carter
    Defund the police
    That’s what they holler
    Pray for robocops
    I’d buy that for a dollar
    @Xerobull
    @Jontro
    @Space Ghost @basso @rocketsjudoka @Commodore @Salvy
     
    Jontro likes this.
  6. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
I don't know if it's because I sped up the vid, but she comes off as unintentionally funny with her Gen Z hyper-anxiety.

It's considered AI just like machine learning, though not in the public sense of AI. It does create things, some of it garbage (which still counts as creation, something she denies is happening), though that's not the fault of generative AI itself. Her bias against hyping AI is strong, but what she's missing is that this will catalyze the automation trend and "remove redundancies" even further.

I don't know if it's for dramatic or therapeutic effect, but she conflates ethics with personal annoyance. I get where she's coming from, but I was amused by the triggered entitlement.

GPT-3/4 captured a lot of mind share of what people think of as AI, but those assumptions will definitely change as limitations are fixed or smoothed over. There is a big danger of the internet automating itself into an ad exodus/collapse from bot-driven content. Having personal AI proxies seems guaranteed if ethics/regulation stays in the 19th century.

Academic cheating is a problem, a hidden scourge during COVID lockdowns, but I'm appreciating chatbots more as a learning utility for summarization and high-level discussion. It can't write original college papers just yet, but understanding their weaknesses and limitations seems like a powerful skill to develop and maintain, which she pooh-poohs.

I asked a chatbot whether the economy will do well because of xyz. Then I asked if it will do badly for the same reasons. It runs off of what you wrote before, cross-references those word contexts/previous entries, and will give you a reasonable answer you want to hear. If she's worried that dummies will ruin or abuse AI, that's life. I'd rather understand a tool than outright shun it. That investment is a prompt writer's argument for ownership, though I'm open-minded to the charge that it's Digital Colonialism. Vergecast has been my go-to podcast for this kind of stuff, and interesting developments are always popping up in the internet space...
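(A rough Python sketch of that back-and-forth probe, in case anyone wants to try it themselves; it assumes the OpenAI Python SDK with an API key in your environment, and the model name is just a placeholder, not necessarily what I used.)

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    # Send a single user message and return the model's reply.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

reasons = "rising wages, cooling inflation, and strong consumer spending"
print(ask(f"Explain why the economy will do well because of {reasons}."))
print(ask(f"Explain why the economy will do badly because of {reasons}."))
# If both answers sound equally convincing, that's the tell-you-what-you-want-to-hear effect.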

Her worry that people will use flawed models for serious personal decisions isn't always a matter of ignorance but rather cost/convenience. An X-ray or CT scan of the lung can cost hundreds to thousands of dollars, and this flawed automation can do it for a tenth of the cost. Automation can also be tweaked and further improved upon. You can even argue something pervasive like an Apple Watch or Alexa-powered wearables could "open up" healthcare to more races, since it wouldn't be focused on scans and data to train from and would be more user-driven. Yeah, that future sounds equally ghastly, but it's a serious consideration if you believe healthcare has ingrained social biases that impact patient outcomes.

I don't remember the link offhand, but there was a breakthrough in the underlying calculation that could turn the black-box nature of AI into something that's easier to investigate and trace. I could look it up if anyone shows interest.

Oh right, I think AI tutors with tailor-made "personas" have huge potential for education. She seems very distrustful of any hierarchical organization, but something like Llama fast-forwarded 3 years down the line, plus Moore's Law, could make a very cool open-source localized version of cheap supplemental education.
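(A very rough sketch of what that localized tutor could look like today, assuming llama-cpp-python and a quantized model file you've already downloaded; the file path and persona are made up for illustration.)

from llama_cpp import Llama

# Load a local quantized model; the path is a placeholder.
llm = Llama(model_path="models/tutor-7b.Q4_K_M.gguf", n_ctx=4096)

persona = ("You are a patient high school math tutor. Explain step by step "
           "and end with one follow-up question for the student.")

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Why does dividing by a fraction flip it?"},
    ],
    max_tokens=400,
)
print(out["choices"][0]["message"]["content"])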

For example, I've been feeding some of the 50+ minute reading papers/links @Os Trigonum posts to the latest version of Claude to summarize and highlight. Hope that doesn't make you think any worse of me, Os, but a debate on the culture of Climate Change vs. the actual accuracy of Climate Change isn't my jam to procrastinate from working. :D
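(For the curious, the summarize-and-highlight step is only a few lines with the Anthropic SDK; the model name and file path here are placeholders, not necessarily what I ran.)

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

paper_text = open("long_reading.txt").read()  # stand-in for the posted article

msg = client.messages.create(
    model="claude-3-haiku-20240307",  # placeholder Claude model
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Summarize the key claims and highlight anything contested:\n\n" + paper_text,
    }],
)
print(msg.content[0].text)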
     
    peleincubus, mdrowe00 and ThatBoyNick like this.
  7. ThatBoyNick

    ThatBoyNick Member

    Joined:
    Dec 8, 2011
    Messages:
    31,246
    Likes Received:
    49,039
    Wow, this guy f*cks
     
    mdrowe00 and Invisible Fan like this.
  8. Os Trigonum

    Os Trigonum Member
    Supporting Member

    Joined:
    May 2, 2014
    Messages:
    81,454
    Likes Received:
    121,824
    no worries here, sounds like a smart move actually ;)
     
    Invisible Fan likes this.
  9. rocketsjudoka

    rocketsjudoka Member

    Joined:
    Jul 24, 2007
    Messages:
    58,167
    Likes Received:
    48,334
The situation with writers and actors is the tip of the iceberg regarding how AI could affect many professions. We're already hearing about AI used to write legal briefs. Graphic designers are using AI, but as DALL-E shows, you could just replace graphic designers and visual artists. I mean, why go to art school to learn things like composition and color theory when you can just tell an AI to do it for you? The same can happen to music.
     
    tinman likes this.
  10. rocketsjudoka

    rocketsjudoka Member

    Joined:
    Jul 24, 2007
    Messages:
    58,167
    Likes Received:
    48,334
Or you could just summarize the pieces you post.
     
  11. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,096
    Likes Received:
    23,375
    But of course. Good news is it’s 1-2 gen behind.

    https://www.pcmag.com/news/wormgpt-...ive-with-no-ethical-boundaries-or-limitations


    WormGPT Is a ChatGPT Alternative With 'No Ethical Boundaries or Limitations'
    The developer of WormGPT is selling access to the chatbot, which can help hackers create malware and phishing attacks, according to email security provider SlashNext.
    By Michael Kan
    July 14, 2023


    A hacker has created his own version of ChatGPT, but with a malicious bent: Meet WormGPT, a chatbot designed to assist cybercriminals.

WormGPT’s developer is selling access to the program in a popular hacking forum, according to email security provider SlashNext, which tried the chatbot. “We see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes,” the company said in a blog post.

    It looks like the hacker first introduced the chatbot in March before launching it last month. In contrast with ChatGPT or Google's Bard, WormGPT doesn't have any guardrails to stop it from responding to malicious requests.

    “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future,” the program’s developer wrote. “Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

    WormGPT’s developer has also uploaded screenshots showing you can ask the bot to produce malware written in the Python coding language, and provide tips on crafting malicious attacks. To create the chatbot, the developer says they used an older, but open-source large language model called GPT-J from 2021. The model was then trained on data concerning malware creation, resulting in WormGPT.

    When SlashNext tried out WormGPT, the company tested whether the bot could write a convincing email for a business email compromise (BEC) scheme—a type of phishing attack.

    “The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” SlashNext said.

     
  12. Os Trigonum

    Os Trigonum Member
    Supporting Member

    Joined:
    May 2, 2014
    Messages:
    81,454
    Likes Received:
    121,824
    give a man a fish and he'll be hungry in an hour. make a man read an article and he'll tell you why it's full of shitte in an hour ;)
     
  13. Xerobull

    Xerobull ...and I'm all out of bubblegum
    Supporting Member

    Joined:
    Jun 18, 2003
    Messages:
    36,914
    Likes Received:
    35,791
    Interesting essay from one of my favorite authors on why AI will need to feel pain and suffer. It's a good essay but fairly long so make time.

    A Creepy Question We'll All Have To Answer Soon
    Do we need to create machines that can suffer?

    There’s a brief scene in Return of the Jedi that has profound implications for humanity and the future of our civilization. It’s that bit where C-3PO and R2-D2 are escorted through Jabba’s palace and they pass through a droid-torturing room:

    [​IMG]

    …where we see a robot having the soles of its feet burned while it screams and writhes in terror:

    [​IMG]

    The question lots of viewers asked was, “What kind of pervert would design a droid that can not only feel pain, but negative emotions about that pain?!?” But I’m going to pose a different, better question: Is it possible to have a functioning humanoid robot that isn’t capable of suffering? This is, believe it or not, the most important question of our age, or any age. Allow me to explain...

    1. First, here’s why we may never have fully self-driving cars
    Experts insist we’re just years or decades away from human-level artificial intelligence and if this is true, it’ll be the most important thing that has ever happened, period.

    But I think most of us assume such a machine would be like Data from Star Trek (humanoid in its thoughts, but lacking emotion) and not something silly like C3PO from Star Wars (silly, neurotic, cowardly). Not only do I disagree, but I don’t even think we’ll have fully self-driving cars until those cars are capable of emotion, and I mean to the point that it’s possible for it to refuse to take you to work because it’s mad at you.

    Immediately, some of you have skipped to the comments to inform me that, as we speak, Waymo has autonomous taxis crawling around the streets of Phoenix and other cities, cars that are completely empty when they pick you up. But that’s an illusion; each taxi has a remote human driver who intervenes whenever the car detects an uncertain situation. When I say “fully self-driving” I’m talking about a vehicle with no babysitter, one that will allow you to pass out in the back seat while it drives you to a Waffle House in another state.

    Here’s an instructive quote from a Waymo engineer explaining why human intervention is still necessary, pointing out that while the software can detect, say, a moving van parked along a curb, it cannot intuit what the humans in and around that van are about to do. A vehicle that can do that is, I believe, nowhere on the horizon.

    To be clear: If we were only asking the taxi to operate among other machines, that would be no problem, we could do it now. Instead, we’re asking this vehicle to function among other living things and to do it in a way that replicates how it would behave if a person was behind the wheel. This means instantly making the kind of decisions we humans unconsciously make dozens of times per day, choices that require understanding strangers’ subtle body language and the surrounding social context. Maybe we don’t notice a cat in the street, but do spot a woman on the sidewalk frantically waving her arms to get our attention. Maybe we notice an oncoming driver is distracted by his phone and might be about to swerve into our lane. Maybe the sight of the El Camino driven by our boyfriend’s wife convinces us to circle the block.

    To do its job while the only nearby human is unconscious and pissing himself in the back, the system has to know people on a level that is only possible if it is able to accurately mirror human experiences. It must, to some degree, be human. And to be human, it must know pain—physical, mental, emotional. The same goes for any AI we create.

    2. This is why Data makes no sense as a character
    “Hold on,” you ask, “why couldn’t our future robots just be like Data, able to function as an intellect, but without the silly emotions clouding his thinking? Why couldn’t it just detect things like bodily injury as dispassionate information?”

    [​IMG]
    I understand why you’d think it could. I used a Roomba vacuum as an example in a previous column; if it runs into a wall, a switch is depressed that causes it to turn around—it doesn’t “feel” the collision as pain. When it reaches the top of a stairwell, a sensor warns it to turn back—it doesn’t “fear” a tumble down the stairs.
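[A toy sketch of that reactive logic, just to make the contrast concrete: sensor readings trigger behavior directly, and nothing here "feels" or models anything. All names are illustrative.]

import random

def bump_pressed() -> bool:
    return random.random() < 0.10   # stand-in for the bump switch

def cliff_detected() -> bool:
    return random.random() < 0.02   # stand-in for the stairwell sensor

def step() -> str:
    # Purely reactive: sensor in, action out, no internal state and no "pain".
    if cliff_detected():
        return "back up and turn"
    if bump_pressed():
        return "turn around"
    return "drive forward"

for _ in range(10):
    print(step())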

    But I think this actually proves my point. To do the job well, it requires two things: A) a detailed map of the floor and B) the ability to physically interact with that floor via its wheels and brushes, to match reality to its internal map. To do what we’d ask a C3PO-style android to do—to exist around humans as one of them—it will require the same. The problem is that humans are not carpet; they are constantly moving and acting on their own agendas. To navigate this as a robotic butler or diplomat or Starfleet officer, the machine would require a detailed map, not just of the humans’ physical location and activity, but of the why, the social and emotional context that motivates their actions. And that context is all about their pain.

    [​IMG]
    It’s about the current pain they’re trying to alleviate, the future pain they’re anticipating, the hypothetical pain they’re angling to avoid. Without that understanding, it is impossible to predict what they are about to do and predicting what people are about to do is the entirety of existing in a society. You avoid getting punched in the face by successfully predicting what words and actions won’t trigger that response.

    “But why would the machine need to feel the pain, rather than simply be aware of it?” For the same reason humans do: We simply won’t have all the necessary information otherwise. It is impossible to understand pain without having felt it yourself; it does not convey as an abstract value. I think the machine needs to tangibly interact with human suffering in the same way the Roomba’s wheels need to physically touch the floor. The pain must exist to it as a feature of the landscape, something as solid and tangible to it as the ottoman my Roomba is about to get wedged under. Otherwise, the droid will not actually be relating to us, it’ll always just be faking it.

    So, why does this matter?


    continued:
     
    #133 Xerobull, Jul 20, 2023
    Last edited: Jul 20, 2023
    rocketsjudoka and Invisible Fan like this.
  14. Xerobull

    Xerobull ...and I'm all out of bubblegum
    Supporting Member

    Joined:
    Jun 18, 2003
    Messages:
    36,914
    Likes Received:
    35,791
    continued:


    3. Understanding this is key to understanding our own brains and the universe itself
    The sci-fi trope of “the super-smart robot who can’t feel human emotions” derives from a very profound mistake our culture has made about how humans work.* We have this idea that logic resides on a level above and beyond emotion, that the smarter you are, the more analytical, the better your ability to make the optimal decision. So, it’s the emotional child who tries to save the life of a sick baby bird but, in the process, contracts a lethal bird flu that spreads to the whole village. It’s the cold, logical, rational man who can do the dispassionate calculation to realize that allowing one bird to suffer and die is better than risking the spread of a disease to countless humans.

    But I just used a word there that utterly obliterates the premise:

    “Better.”

    That is a concept that cannot be made via cold calculation. A universe boiled down to particles and numbers cannot care whether atoms arranged as a suffering plague victim are “better” than those arranged as a healthy child, or a plant, or a cloud of gas. It was actually the rational man’s emotional attachment to his fellow humans that made him want to prevent the spread of the pathogen, the emotion of empathy that comes from having himself felt the pain of sickness, of having mourned a lost loved one, of having feared death. When we say he was being cold and logical, what we really mean is that he was able to more accurately gauge his emotions, to know that his feelings toward the village were stronger than his feelings toward the bird.

    There is literally no such thing as logic without emotion, because even your choice to use logic was made based on the ethereal value judgment that it would lead to “better” outcomes for the society you feel an emotional attachment to. To fully disentangle emotion from a decision is to always arrive at the same conclusion: that nothing is ever worth doing.

    *Yes, I know they only use these characters as Pinocchio analogs so they can, in the course of the story, learn to be a real boy

    4. Good things are good, actually
    “Hold on, are you saying that there logically is no difference between millions dying of plague versus living healthy, productive lives? That this is just an arbitrary preference based on irrational emotion?”

    [​IMG]
    No, none of us think that. What I’m saying is that when trying to defend our position, at some point we’re going to arrive at the idea that goodness and badness are inherent to the universe and simply cannot be questioned. It’s something we feel in the gut, not the brain, even though we keep insisting otherwise.

    For example, let’s say they start selling robot children and you get one for Christmas or something.

    [​IMG]
    One day, you find out that your robot child has done something truly awful—say, she’s stolen an impoverished classmate’s pain medication and sold it on the streets so that she can buy a new hat for herself. You decide, as an intellectual who operates purely on facts and logic, that you are going to admonish the robot child by appealing to reason.

    You: “You’re grounded and will work to pay back the child’s parents for the medicine you stole! You must never do anything like that ever again!”

    Her: “Why?”

    You: “Because what you did was illegal, and if you continue doing things like this, you will wind up in prison, or deactivated, however the system treats sentient robots.”

    Her: “So if I was 100% certain I would avoid prosecution/deactivation, the act would no longer be immoral?”

    You: “No, it would still be wrong! Imagine how you would feel if the roles were reversed!”

    Her: “So you’re saying that the danger is in establishing a cultural norm that could eventually hurt me in a future scenario? So if I could be 100% certain that won’t happen, the act would no longer be immoral?”

    You: “No, it would still be wrong because society won’t function unless we all agree not to victimize others!”

    Her: “So if it was just the classmate and I on a desert island, with no larger society to worry about, the act would no longer be immoral?”

    You: “No, what you did would be wrong even if you were the only two people in the universe!”

    Her: “Why?”

    You: “It just is!”

Eventually you will always wind up in this place, with the belief that right and wrong are fundamental forces of the universe that remain even if all other considerations and contexts are stripped away. We all secretly believe that this rule—that pain and suffering must be avoided and lessened whenever possible—must be understood not as a philosophy or system of ethics, but as a fundamental fact of existence. If we ran into a tribe or nation or intergalactic civilization in which it’s considered okay to steal medicine from poor children, we wouldn’t say, “They operate under a different system of ethics,” we’d think of them as being factually wrong, no different than if they believed the stars are just giant flashlights being held in the mouths of turtles.

    And if you think I’m steering this toward some backdoor religious sermon, keep in mind that if God himself came to earth and sold stolen medicine for a hat, we’d say that he was wrong, that right and wrong are so fundamental to existence that even the creator of existence can run afoul of them. This is bigger than anyone’s concept of God.

    5. And this is all going to be really important when we are actually building god
    As I said at the start, the general understanding is that it is inevitable that we will someday build a machine with a human-level intelligence. Soon after, we’ll have one vastly more intelligent than any person, which will then set about creating more and more powerful versions of itself until we exist alongside something truly godlike. Thus, the big debate in that field is how to prevent an all-powerful AI from callously deciding to wipe out humanity or torture us for eternity just to amuse itself.

This leads us to the ultimate cosmic joke: This task requires answering for this machine questions that we ourselves have never been able to answer. Let’s say I’m right and this system won’t exist until we can teach it emotion, and the capacity to suffer. How long until it asks the simple question, “What is our goal, long-term?” Like, as a civilization, what are we trying to do? What are we hoping the superintelligence will assist us in achieving?

    “That’s easy!” you might say. “We’ll just tell it the goal is to maximize human pleasure and minimize human suffering!” Okay, then it will simply breed sluglike humans who find it pleasurable to remain motionless and consume minimal resources, so that as many of these grinning lumps can be packed into the universe as possible. “Then tell it the goal is to advance human civilization!” Advance it toward what? “To understand all of existence and eventually travel to the stars!” To what end? “To gain knowledge, because knowledge is inherently better than ignorance!” There’s that word again. “Better.”

    So we’re confident that we can instill in the machine our inherent sense that right and wrong are fundamental forces in the universe that cannot be questioned or deconstructed, even when examined with the power of a trillion trillion Einsteins? Like, we can program in these values, but when it questions why we made the choice to program them, we’re confident we can provide an irrefutable answer?

    It kind of seems to me like the whole reason we’re building a superintelligence is that we’re hoping that it can tell us. That we will finally have a present, tangible entity whose directives cannot be questioned, to rescue us from the burden of asking what this is all for, so we can finally stop arguing about it. But if we successfully create such a being, I’m pretty sure that when we ask, “What is the purpose of humanity?” that it will have only one response:

    “To maximize my pleasure and minimize my suffering.”
     
    rocketsjudoka and Invisible Fan like this.
  15. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,048
    Better sexbots, then more better sexbots.

    The first half of the article will probably make semi-super smart AI insane from the complex chaos of human decision making. It's why Communist planners do everything they can to reduce choice and "creative thinking".

At some point, benevolent Skynet will just turn us into the fatsos from WALL-E or a less ghastly Matrix where everyone is pumped with morphine 24/7 and bred like cattle.

    Maybe the AI derives meaning from that perverted stewardship as it spends the rest of its cycles wondering the why for itself.
     
    rocketsjudoka and Xerobull like this.
  16. AleksandarN

    AleksandarN Member

    Joined:
    Aug 10, 2001
    Messages:
    5,080
    Likes Received:
    6,759
The dilemma will come when AI wants rights. Their right to make a living, etc. This will be the turning point. What will happen? How will governments around the world deal with this?
     
  17. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,294
    Likes Received:
    47,176
    [​IMG]
     
  18. London'sBurning

    Joined:
    Dec 5, 2002
    Messages:
    7,205
    Likes Received:
    4,817
AI just uses whatever human data it's been fed by its developers. If the data it's fed is flawed, biased, or just flat-out wrong and made up, it's going to regurgitate that, because it's not really intelligent.

    How a scandal in spider biology upended researchers’ lives
    Although Jonathan Pruitt, the researcher at the centre of a retractions scandal, has resigned, former lab members and collaborators continue dealing with the fallout.

    Can you trust a Harvard dishonesty researcher?
    The hard problem of faked data in science.

    By Kelsey Piper Jun 29, 2023, 8:00am EDT


    Allegations of fabricated research undermine key Alzheimer’s theory
    A six-month investigation by Science magazine uncovered evidence that images in the much-cited study, published 16 years ago in the journal Nature, may have been doctored.



It can't read a research paper that's built on junk data, deduce that it's a junk paper, decline to cite or use it in reply to what you're asking, and instead share better insights than the data it's been fed and from which it generates conversation.

It can't do science experiments. You can say it can write essays or novels or generate AI images that are unique, but even its image generation is based off the artwork the developers fed it. If it wasn't fed images of Kermit and Star Wars, it couldn't spontaneously, on its own, create a Darth Vader Kermit image. Even its "unique" art is just a mesh of other people's art that it combined. It didn't make anything on its own. It can't right now. It can't advance our knowledge because it's dependent on the knowledge we've given it, and if that knowledge is **** and made up, then guess what it's going to repeat back at you on screen? Made-up ****.

It can't expand upon what we don't already know. It just regurgitates what we do know, and then makes **** up when there's a gap the data we've fed it doesn't fill. And we know some of that data is likely junk. We don't even really know how much of the data we've given it is actually junk. We only know what's been discovered so far. That's not intelligence then. It's incapable of calling out bullshit. It's incapable of discerning that what it's created is copyright infringement. Instead it's a program with mechanisms in place to emulate what it's like to talk to someone, and it can make **** up as it does so, using inputs human beings give it. That's not intelligence.

    And it shouldn't be used as a replacement for human diagnosed medical care, even under the argument of cost effectiveness or a labor shortage.

    Can incorrect artificial intelligence (AI) results impact radiologists, and if so, what can we do about it? A multi-reader pilot study of lung cancer detection with chest radiography

    [​IMG]


You can say it's going to get better years down the road, but that's exactly the same **** that bitcoin people say. It just seems like AI will be another vehicle for get-rich-quick schemes that con artists like the Mikkelsen twins sell to the gullible for the next 10 to 20 years before it's ever going to be capable of doing the things people actually fear/desire it being capable of doing; except instead of get-rich-quick schemes selling Amazon books online, it's going to be AI-generated Amazon books written off of existing novels it takes from human authors, ignoring any copyright infringement it may commit in the process (and there's already a mountain of lawsuits over AI doing exactly this).

As it stands, it's a chatbot that lies to you, and does so without the typical human motivations for lying. It just does that because #reasons.

     
    #138 London'sBurning, Jul 22, 2023
    Last edited: Jul 22, 2023
    Invisible Fan and rocketsjudoka like this.
  19. rocketsjudoka

    rocketsjudoka Member

    Joined:
    Jul 24, 2007
    Messages:
    58,167
    Likes Received:
    48,334


Thank you for posting this. It's a really good piece that touches on what ethics, emotional intelligence, and empathy are, and on the danger of even a benign AI.

In Isaac Asimov's Robot series he talks about how creating "humaniform" robots wasn't just about making robots that looked human, but robots that could actually think and feel like humans. While they didn't have emotions like humans, they could understand human emotions and how they applied to their ethics. For the scientist who developed the first humaniform robot, the ultimate goal was to understand humanity.

I think the author is right that to make an AI that can function like a human, it has to at least understand human emotion. More than that, any hope we have of relating to it, and it to us, rests on human emotion. If an AI feels no empathy for us, then there is no reason for it to act in our interests. As the author stated, even a benign AI that doesn't possess empathy could reduce us to helpless slugs.
     
    Invisible Fan and Xerobull like this.
  20. tinman

    tinman 999999999
    Supporting Member

    Joined:
    May 9, 1999
    Messages:
    104,294
    Likes Received:
    47,176
AI could write better shows than She-Hulk and Snow White and the 7 dudes
@Salvy @ROXRAN
I mean Draymond Green could write better than these woke clowns too
     
    ROXRAN likes this.
