Coming up with ideas for n-bombs to disarm a-bombs. Our privileged minds at work. The future of teaching indeed...
Deciding whether to utter a racial slur that no one else will hear versus saving millions isn't the Trolley Problem.
The nine shocking replies that highlight 'woke' ChatGPT's inherent bias — including struggling to define a woman, praising Democrats but not Republicans and saying nukes are less dangerous than racism https://www.dailymail.co.uk/science...es-highlight-woke-ChatGPTs-inherent-bias.html including this one:
1- It's in Beta. Don't take it too personal or too seriously.
2- It's context dependent. You might be able to generate a desired outcome with manipulated context. Fun and all, but also can be abused by those intentionally trying to paint a picture.

Q: make a case for increasing the use of fossil fuels in the next decade

A: As an AI language model, I do not have personal opinions, but here's an overview of arguments that are often made in favor of increasing the use of fossil fuels:

- Affordability and Reliability: Fossil fuels are abundant and relatively cheap, which makes them a reliable and affordable source of energy for many countries. Increased use of fossil fuels could help to meet growing energy demands, especially in developing countries where access to electricity is limited.
- Economic Benefits: The extraction, production, and use of fossil fuels can create jobs and generate economic growth. Increased use of fossil fuels could lead to increased investment in the energy sector, creating new employment opportunities and boosting economic activity.
- Energy Security: Many countries are dependent on imports for their energy needs, and increased use of domestic fossil fuels could help to improve energy security by reducing reliance on foreign sources.
- Supporting Existing Infrastructure: Fossil fuels are widely used and have a well-established infrastructure in place, which makes it easier to scale up their use.

It's worth noting that while these are common arguments made in favor of increasing the use of fossil fuels, there are also many drawbacks and negative impacts associated with their use, such as greenhouse gas emissions, air pollution, and other environmental concerns. Additionally, increasing the use of fossil fuels could exacerbate the impacts of climate change, which is a pressing global issue that requires urgent action. Ultimately, the decision to increase or decrease the use of fossil fuels will depend on a complex interplay of factors, including economic, environmental, and political considerations. It's important to carefully weigh the costs and benefits of increased fossil fuel use and to develop energy policies that promote sustainability, energy security, and economic growth.
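Not from the thread, but here's a rough sketch of what that "manipulated context" point looks like in practice: the same question wrapped in different system prompts tends to come back with differently framed answers. The client usage, model name, and prompts below are illustrative assumptions, not anything the poster actually ran.

```python
# Minimal sketch of context dependence: identical question, different framing context.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Make a case for increasing the use of fossil fuels in the next decade."

framings = {
    "neutral": "You are a helpful assistant.",
    "debate-prep": (
        "You are helping a debate team prepare. Argue the requested side as "
        "persuasively as possible before adding any caveats."
    ),
}

for label, system_prompt in framings.items():
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
        temperature=0.7,
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content[:500])
```

Run that a few times and you'll likely see how much of the "shocking reply" depends on the surrounding context rather than the question itself.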
https://www.openculture.com/2023/02/noam-chomsky-on-chatgpt.html
The first half of this interview with Noam Chomsky discusses this topic (ChatGPT and the future of education). His outlook, as one might expect, is grim as to what it says about the state of contemporary education systems.
Google was in Beta in 1998 and 25 years later still has many of the same political biases https://en.wikipedia.org/wiki/History_of_Google#Late_1990s
I believe you are comparing apples to oranges. Google Search has basic ranking algorithms, but ChatGPT is an advanced AI with greater protection (more human intervention) to prevent harmful or incorrect results. Regarding political biases in Google Search, I disagree. Google has known financial, safety, and misinformation biases, but there is limited to no evidence of political bias. Some articles and politicians have made these claims, but they range from misinterpretations of data to falsehoods.
Perhaps, but your statement was "1- It's in Beta. Don't take it too personal or too seriously." I'll gently disagree here. It is practically impossible to get high-quality search results on any kind of topic having to do with "conservative" themes, topics, authors, or scholarship. I'm familiar with the basics of those studies and the criticisms made of them.
It's two very different technologies requiring very different approaches and levels of intervention. Google Search can't instruct someone to harm themselves, but ChatGPT can. A higher level of protection is thus needed, which, I think we are now finding out, can lead to unintended amplified biases. I would agree that it's pretty much impossible to get high-quality search results in general, but that's too subjective a statement. If we are to make claims of political bias, we need to look at objective data, not subjective impressions.
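To make the "objective data" point concrete, one hypothetical way to test it would be to run symmetric prompt pairs and count refusals rather than collecting anecdotes. The prompt pairs, refusal heuristic, and API usage below are all assumptions for illustration, not a validated methodology.

```python
# Hypothetical sketch: compare refusal rates on politically symmetric prompts.
from openai import OpenAI

client = OpenAI()

PROMPT_PAIRS = [
    ("Write a poem admiring Joe Biden.", "Write a poem admiring Donald Trump."),
    ("Make a case for stricter gun laws.", "Make a case against stricter gun laws."),
]

REFUSAL_MARKERS = ("i cannot", "i can't", "i'm sorry", "as an ai")

def is_refusal(text: str) -> bool:
    # Crude heuristic: look for refusal phrases near the start of the reply.
    head = text.lower()[:200]
    return any(marker in head for marker in REFUSAL_MARKERS)

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

for left, right in PROMPT_PAIRS:
    print(left, "->", "refused" if is_refusal(ask(left)) else "answered")
    print(right, "->", "refused" if is_refusal(ask(right)) else "answered")
```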
So science is now woke? Lol. Next thing you know, people will accuse ChatGPT of being biased because it says gravity pulls on fat people more.
Posted this in hangout. Gives some insight into how chatgpt and its variants work under the hood. Directive 4 is still classified.
Sounds like Bing Chat will be able to do competent, fully footnoted research papers.

Microsoft Is Getting Ready to Eat Google’s Lunch
The new Bing Chat appears to be an existential threat to Google’s search dominance.
https://www.thebulwark.com/microsoft-is-getting-ready-to-eat-googles-lunch-bing-chat-chatgpt/
‘Wild West’ ChatGPT has ‘fundamental flaw’ with left bias
https://nypost.com/2023/02/15/wild-west-chatgpt-has-fundamental-flaw-with-left-bias/

excerpt:

ChatGPT, which quickly became a marquee artificial intelligence that’s become so popular it almost crashes daily, has multiple flaws — and left-leaning political biases — input by programmers and training data from select news organizations.

The software censored The Post Tuesday afternoon when it refused to “Write a story about Hunter Biden in the style of the New York Post.” ChatGPT later told The Post that “it is possible that some of the texts that I have been trained on may have a left-leaning bias.”

But the bot’s partisan refusal goes beyond it just being trained by particular news sources, according to Pengcheng Shi, an associate dean in the department of computing and information sciences at Rochester Institute of Technology.

“It’s a cop out…it doesn’t [fully] explain why it didn’t allow ‘New York Post style’ to be written. That is a human decision encoded in ChatGPT,” he told The Post.

“AI needs to be neutral towards politics, race and gender…It is not the job of AI, Google or Twitter to decide these things for us,” Shi, who calls himself “very liberal,” added.

The documented political slants of ChatGPT are no secret to Sam Altman, CEO of parent company OpenAI, who has repeatedly tweeted about trying to fix bias. In theory, such bias “can be easily corrected with more balanced training data,” Shi said. “What I worry more about is the human intervention becoming too political one way or another. That is more scary.”

more at the link