How well do we know "Spaceghost"? But seriously, old BBS pals aside, we will soon / already be mobbed by AI internet personalities. Another good reason to limit social media. Facebook, for instance, has floored the accelerator on releasing these agents into otherwise-human communities. It's almost like the modern world won't let a thinking person become anything but some sort of prepper. My inner gun rack is never too far submerged, as Mrs. B-Bob will attest.
I remember this from decades ago and the story still gets revived from time to time.

Weizenbaum's nightmares: how the inventor of the first chatbot turned against AI

Computer scientist Joseph Weizenbaum was there at the dawn of artificial intelligence – but he was also adamant that we must never confuse computers with humans.

In 1966, an MIT professor named Joseph Weizenbaum created the first chatbot. He cast it in the role of a psychotherapist. A user would type a message on an electric typewriter connected to a mainframe. After a moment, the "psychotherapist" would reply.

User: Men are all alike.
Chatbot: IN WHAT WAY
User: They're always bugging us about something or other.
Chatbot: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
Chatbot: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I'm depressed much of the time.
Chatbot: I AM SORRY TO HEAR YOU ARE DEPRESSED

Weizenbaum published this sample exchange in a journal article that explained how the chatbot worked. The software was relatively simple. It looked at the user input and applied a set of rules to generate a plausible response. He called the program Eliza, after Eliza Doolittle in Pygmalion. The cockney flower girl in George Bernard Shaw's play uses language to produce an illusion: she elevates her elocution to the point where she can pass for a duchess. Similarly, Eliza would speak in such a way as to produce the illusion that it understood the person sitting at the typewriter.

"Some subjects have been very hard to convince that Eliza (with its present script) is not human," Weizenbaum wrote. In a follow-up article that appeared the next year, he was more specific: one day, he said, his secretary requested some time with Eliza. After a few moments, she asked Weizenbaum to leave the room. "I believe this anecdote testifies to the success with which the program maintains the illusion of understanding," he noted.

Eliza isn't exactly obscure. It caused a stir at the time – the Boston Globe sent a reporter to go and sit at the typewriter and ran an excerpt of the conversation – and remains one of the best known developments in the history of computing. More recently, the release of ChatGPT has renewed interest in it. In the last year, Eliza has been invoked in the Guardian, the New York Times, the Atlantic and elsewhere. The reason that people are still thinking about a piece of software that is nearly 60 years old has nothing to do with its technical aspects, which weren't terribly sophisticated even by the standards of its time. Rather, Eliza illuminated a mechanism of the human mind that strongly affects how we relate to computers.

Early in his career, Sigmund Freud noticed that his patients kept falling in love with him. It wasn't because he was exceptionally charming or good-looking, he concluded. Instead, something more interesting was going on: transference. Briefly, transference refers to our tendency to project feelings about someone from our past on to someone in our present. While it is amplified by being in psychoanalysis, it is a feature of all relationships. When we interact with other people, we always bring a group of ghosts to the encounter. The residue of our earlier life, and above all our childhood, is the screen through which we see one another.

This concept helps make sense of people's reactions to Eliza. Weizenbaum had stumbled across the computerised version of transference, with people attributing understanding, empathy and other human characteristics to software.
While he never used the term himself, he had a long history with psychoanalysis that clearly informed how he interpreted what would come to be called the "Eliza effect".

As computers have become more capable, the Eliza effect has only grown stronger. Take the way many people relate to ChatGPT. Inside the chatbot is a "large language model", a mathematical system that is trained to predict the next string of characters, words, or sentences in a sequence. What distinguishes ChatGPT is not only the complexity of the large language model that underlies it, but its eerily conversational voice. As Colin Fraser, a data scientist at Meta, has put it, the application is "designed to trick you, to make you think you're talking to someone who's not actually there"....

____________________________

Much more about this at the link.
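The article's description of how Eliza worked (look at the user's input, apply a set of rules, echo something back) is simple enough to sketch. Below is a minimal, hypothetical Python illustration of that style of rule; the patterns, templates, and fallbacks are invented for demonstration and are not Weizenbaum's actual script, which was far larger and ran on 1960s MIT hardware.

```python
import random
import re

# A few Eliza-style rules: each is a regex plus response templates that
# reuse the captured text. These rules are made up for illustration only.
RULES = [
    (re.compile(r"\bi am (.*)", re.I),
     ["WHY DO YOU SAY YOU ARE {0}", "HOW LONG HAVE YOU BEEN {0}"]),
    (re.compile(r"\bmy (.*)", re.I),
     ["TELL ME MORE ABOUT YOUR {0}", "WHY DO YOU MENTION YOUR {0}"]),
    (re.compile(r"\balways\b", re.I),
     ["CAN YOU THINK OF A SPECIFIC EXAMPLE"]),
]

# Non-committal prompts used when no rule matches, to keep the "conversation" going.
FALLBACKS = ["PLEASE GO ON", "IN WHAT WAY", "WHAT DOES THAT SUGGEST TO YOU"]


def reply(user_input: str) -> str:
    """Return a canned, reflective response chosen by simple pattern rules."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            # Reuse the user's own words in the reply, minus trailing punctuation.
            fragment = match.group(1).rstrip(".!?") if match.groups() else ""
            return random.choice(templates).format(fragment.upper())
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    # Roughly recreates the flavor of the exchange quoted above.
    print(reply("They're always bugging us about something or other."))
    print(reply("I am depressed much of the time."))
```

Even a toy like this hints at why the effect works: nothing is understood anywhere in the program; it just mirrors the user's own words back as questions, and the transference the article describes does the rest.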
The internet is going to turn into such an amazing sh!!show that we will be looking back fondly upon the glory days of now.
I was told this was a work of fiction, but it had all of this: red pill/blue pill, prophecies, AI agents, marveling at our own magnificence as we give birth to AI, and all of it long before the internet was mainstream or the idea of living in a matrix existed.
Atlas, the Iconic Boston Dynamics Robot, Now Functions Entirely on Its Own

Boston Dynamics' Atlas robot has long been admired for its impressive agility, often performing feats that seem to defy the limits of humanoid robotics. But now, Atlas has reached a new milestone in its evolution—autonomy. Thanks to advancements in both hardware and software, Atlas is no longer just a display of physical prowess. It can now complete tasks independently, operating without the need for pre-programmed movements or human control.

A Leap in Independence: Atlas Takes the Lead

Since its unveiling in 2013, Atlas has undergone continuous improvements, transforming from a partially hydraulic machine to a fully electrified robot. This change alone marked a significant shift in its capabilities, providing better efficiency and flexibility. However, it's not just about the hardware—Atlas now boasts the ability to think on its feet. A recent demonstration showcased the robot's impressive ability to move objects autonomously. In the video, Atlas was given a list of locations where it needed to place engine parts. With this simple instruction, the robot set to work, moving the pieces with remarkable fluidity and precision.

This isn't just about lifting heavy objects. Atlas has been designed to navigate and adapt to changing environments. The use of machine learning has strengthened its ability to perceive and interact with the world around it. Through enhanced vision systems, Atlas can analyze its surroundings and adjust its actions accordingly. For example, when it encountered difficulty in placing one of the parts, Atlas immediately recalibrated its movements, showing an impressive level of adaptability.

Moving Beyond Pre-Programmed Actions

Boston Dynamics has emphasized that Atlas functions entirely on its own, without any pre-programmed sequences or remote control. Unlike other robots, such as Tesla's Optimus, which rely on human operators for guidance, Atlas operates with complete autonomy. Every action Atlas performs is generated in real time, allowing it to interact with its environment in a more dynamic and responsive way.

This fully autonomous functionality represents a significant leap in humanoid robotics, with Atlas now able to handle tasks that require not only dexterity but also decision-making based on its immediate surroundings. Such capabilities bring us closer to robots that can assist in more complex and unpredictable environments.

A Promising Partnership with Toyota Research Institute

The latest developments in Atlas' autonomy come shortly after a groundbreaking partnership between Boston Dynamics and the Toyota Research Institute (TRI). This collaboration seeks to combine cutting-edge research in both robotics and artificial intelligence. According to Robert Playter, CEO of Boston Dynamics, the partnership is about leveraging the strengths of both organizations to tackle real-world challenges and build robots capable of solving complex problems.

In this collaboration, the teams aim to enhance Atlas even further, with TRI planning to integrate its own machine learning systems to expand Atlas' capabilities. The goal is to allow the robot to not only complete more complex tasks but also learn from its experiences and adapt to new challenges. However, it's worth noting that the recent demonstration of Atlas' autonomous abilities does not seem to reflect any immediate outcomes from this partnership, though it's clear that such advancements will continue in the near future.
The Road Ahead for Atlas

As Atlas continues to evolve, its potential applications seem almost limitless. Whether it's performing tasks in hazardous environments, assisting with complex industrial processes, or even helping in search-and-rescue missions, this humanoid robot is proving that autonomy is no longer a distant dream. By combining advanced robotic design with machine learning, Boston Dynamics has set a new standard for what robots can achieve.

Atlas has become more than just an agile, physically impressive machine—it's now a symbol of what the future of autonomous robots could look like. And with ongoing advancements like the Toyota partnership, it's only a matter of time before Atlas—and robots like it—become an integral part of our everyday lives.
An inside opinion I heard said Boston Dynamics' robots are garbage: cute show toys, but that's about all they're good for. A certain theme park opening very soon will be using Boston Dynamics robots to play a couple of characters. The robotic characters will interact with people using ChatGPT-like AI. Apparently they are prone to failure and have decided to go in-house, like their next-door rival. That said, they're still supposed to impress. I waited well over a year to ride Tron and wasn't disappointed, so I am eager to see this finished product.
ChatGPT can now write Erotica

The document [guidelines] reveals a notable shift in OpenAI's content policies, particularly around "sensitive" content like erotica and gore—allowing this type of content to be generated without warnings in "appropriate contexts."

https://arstechnica.com/ai/2025/02/...erotica-as-openai-eases-up-on-ai-paternalism/

[Doomed]