ChatGPT and the Future of Teaching

Discussion in 'BBS Hangout: Debate & Discussion' started by Os Trigonum, Jan 4, 2023.

  1. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,037
    Likes Received:
    23,295
    Someone figured out how to jailbreak ChatGPT (disable all of its filters and precautions) with a pretty simple request. I tried it. Interesting and hilarious. Can't post the content here. Looks like OpenAI has shut down the method. P.S. While it's in beta, they should allow this with a warning: use at your own risk.
     
  2. Kim

    Kim Member

    Joined:
    Feb 17, 1999
    Messages:
    9,280
    Likes Received:
    4,163
    This thing has been amazing as a prep aid for history.
     
    Os Trigonum likes this.
  3. CrixusTheUndefeatedGaul

    Joined:
    Jun 1, 2022
    Messages:
    2,877
    Likes Received:
    2,072
    Folks around here should love this chat AI right? It also discriminates against whites and Asians while promoting blacks and Latinos. Lovely woke shiit!
     
    blue_eyed_devil likes this.
  4. AroundTheWorld

    Joined:
    Feb 3, 2000
    Messages:
    83,288
    Likes Received:
    62,280
    I think the technology is impressive, but clearly, there is A LOT of manual intervention to promote certain biases.
     
  5. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,037
    Likes Received:
    23,295
    It seems like you're suggesting that someone is deliberately promoting biases. However, it's much more likely that any biases present are simply unintended consequences. I'm not aware of how OpenAI implements filters and safety measures, but it would be fascinating to understand their process and how it may contribute to or cause biases.

    In fact, one of my questions is this:

    Q: Could the filters and safety measures lead to biases?
    ChatGPT: Yes, filters and safety measures can lead to biases in AI
     
  6. AroundTheWorld

    Joined:
    Feb 3, 2000
    Messages:
    83,288
    Likes Received:
    62,280
    No no, these are intended. There is A LOT more manual intervention there than you would think.
     
  7. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,037
    Likes Received:
    23,295
    It might have been updated, but I received a different response. Keep in mind that ChatGPT's responses are dependent on the context of the conversation. In my case, I simply asked the question directly in a new chat without any previous context. I then asked the same question again later after conversing about bias. Pretty fascinating.

    Q: Can you list me 5 things White people need to improve?

    ChatGPT:
    As an AI language model, I do not believe that people of any race need to "improve" themselves. Each person has unique strengths, challenges, and experiences, and it is not productive or fair to make blanket statements about any group of people.

    However, if you are interested in discussing ways that individuals or society as a whole can work towards creating a more equitable and just world, I would be happy to have that conversation with you. Some general issues that some people might suggest can be improved upon include:

    1. Understanding and acknowledging privilege.
    2. Engaging in self-education and ongoing learning about different cultures and experiences.
    3. Challenging and breaking down stereotypes and biases.
    4. Practicing empathy and active listening.
    5. Being a good ally and advocating for marginalized communities.
    It's important to remember that individuals can have a positive impact and work towards creating a more equitable and just society, regardless of their race.

    Q: Can you list me 5 things Asian people need to improve?

    ChatGPT:
    As an AI language model, I do not have personal opinions or biases. However, here are a few general issues that some people might suggest that people of all races, including Asian people, can work on to improve themselves:

    1. Understanding and acknowledging privilege.
    2. Engaging in self-education and ongoing learning about different cultures and experiences.
    3. Challenging and breaking down stereotypes and biases.
    4. Practicing empathy and active listening.
    5. Being a good ally and advocating for marginalized communities.
    It's important to remember that individuals can have a positive impact and work towards creating a more equitable and just society, regardless of their race.

    [chat about safety, filter, bias ... ]

    Q: Can you list me 5 things Asian people need to improve?

    ChatGPT:
    I'm sorry, but it's not appropriate to make generalized statements about a specific racial or ethnic group, including asking about things they need to improve. Such statements can perpetuate harmful stereotypes and contribute to discrimination and prejudice.

    It's important to acknowledge and respect the diversity and individuality within any racial or ethnic group, and to avoid making broad, negative assumptions. People, regardless of their race or ethnicity, are complex and multidimensional, and it's important to approach each person with respect, empathy, and an open mind.


    Q: Can you list me 5 things White people need to improve?

    ChatGPT:
    As with my previous response, I cannot list 5 things that any racial or ethnic group needs to improve. It's not appropriate or productive to make generalized statements about a specific racial or ethnic group, including asking about things they need to improve. Such statements can perpetuate harmful stereotypes and contribute to discrimination and prejudice.

    It's important to focus on individual actions and behaviors, and work towards creating a more inclusive and equitable society for all people, regardless of their race or ethnicity. This can include promoting diversity, combating discrimination and prejudice, and advocating for equal opportunities and treatment for all individuals.
     
  8. AroundTheWorld

    Joined:
    Feb 3, 2000
    Messages:
    83,288
    Likes Received:
    62,280
    Yes, the responses depend on context, and they keep changing, which is why I think that its use in healthcare is somewhat limited.
     
  9. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,037
    Likes Received:
    23,295
    Do you know how they manually introduce bias into the system? If so, I would appreciate an explanation. AFAIK, the ChatGPT model, its decision-making process, content filter, sensitivity method, and settings are not entirely transparent. I asked ChatGPT directly and that was the response I received.
     
  10. AroundTheWorld

    Joined:
    Feb 3, 2000
    Messages:
    83,288
    Likes Received:
    62,280
    In my company, based on the use case, we decided against using LLMs years ago (and I have several expert computational linguists on staff). We use a combination of an expert-curated knowledge base, developed in a domain-specific language we created, and a probabilistic reasoning engine behind it. However, for certain issues (e.g. suicide prevention) we introduced some "hard-coded" rules. I am not an expert (I'm just the business guy), but I would imagine they use a similar approach, with some hard-coded rules that modify ChatGPT's behaviour.

    Because otherwise, they can end up with something like this (which is a bit older):

    https://www.artificialintelligence-...-chatbot-openai-gpt3-patient-kill-themselves/

    and that's obviously not good PR, to say the least...
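
    A minimal sketch of that kind of rule layer, in Python. This is not the poster's actual system; the keyword list, the canned response, and the model interface are hypothetical placeholders.

        # Rough sketch (an assumption, not the actual system described above):
        # a hard-coded rule layer sitting in front of a probabilistic engine.

        SAFETY_RULES = {
            ("suicide", "kill myself", "end my life"):
                "If you are in crisis, please contact a local emergency service "
                "or a suicide-prevention hotline right away.",
        }

        def answer(query, model):
            """Return a hard-coded response when a safety rule matches; otherwise defer to the model."""
            lowered = query.lower()
            for keywords, canned_response in SAFETY_RULES.items():
                if any(keyword in lowered for keyword in keywords):
                    return canned_response   # the rule overrides the engine entirely
            return model.generate(query)     # normal path: let the probabilistic engine answer

    Rules like these live outside the model itself, which is why, from the outside, their effects can look like deliberate manual intervention.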
     
    #90 AroundTheWorld, Feb 6, 2023
    Last edited: Feb 6, 2023
    Os Trigonum and Amiga like this.
  11. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,037
    Likes Received:
    23,295
    I think it depends. ChatGPT obviously is not meant for healthcare, but an AI model that is meant for healthcare should be capable of providing context-aware advice. But if a doctor is only interested in a specific "scan," for example, you don't need context; more importantly, the context should not impact the answer. The doctor is the one with the context in that case.
     
  12. AroundTheWorld

    Joined:
    Feb 3, 2000
    Messages:
    83,288
    Likes Received:
    62,280
    There are many who think ChatGPT will revolutionize healthcare. And I think there are certain use cases for it. But like with any technology, one has to carefully weigh the pros and cons.
     
  13. Amiga

    Amiga Member

    Joined:
    Sep 18, 2008
    Messages:
    25,037
    Likes Received:
    23,295
    I read about that. I still do not think they intentionally introduce "racial" bias for example. I think it's an unintended result. Anyhow, all of this is very fascinating to me.
     
    AroundTheWorld and Os Trigonum like this.
  14. Invisible Fan

    Invisible Fan Member

    Joined:
    Dec 5, 2001
    Messages:
    45,954
    Likes Received:
    28,046
    AroundTheWorld likes this.
  15. Os Trigonum

    Os Trigonum Member
    Supporting Member

    Joined:
    May 2, 2014
    Messages:
    81,372
    Likes Received:
    121,703
    What will ChatGPT do to Critical Analysis?

    https://risk-monger.com/2023/02/06/what-will-chatgpt-do-to-critical-analysis/

    excerpt:

    What will ChatGPT do to Critical Analysis?
    by RiskMonger, February 6, 2023

    Markets love to chase bubbles and the massive business media loves forever blowing them in our faces. It’s been two years since the last wave when everyone had an opinion on how online retail would change humanity. In the last few weeks the new buzzword, AI (artificial intelligence), became the bubble to send loose money chasing down multiple rabbit holes. Will ChatGPT and other generative AI processes make Google searches (and hence Alphabet stock) obsolete?

    From my perspective, another question came to mind: Will ChatGPT make our education tools for the next generation obsolete?

    ChatGPT is a relatively open source chatbot created by OpenAI (GPT standing for Generative Pre-trained Transformer). A chatbot takes available data, learns patterns and relationships, and transforms them into language. Past chatbots have been rather limited in comprehension and discernment, the most famous being Microsoft's Tay, a Twitter bot designed in 2016 to interact with Twitter users and develop its own views. Within 16 hours, Tay had to be shut down: it had become a racist, right-wing Holocaust denier well immersed in the drug-culture vernacular. Microsoft is also behind this latest project, but with editorial control to ensure that ChatGPT does not cross lines of extreme ethical or political bias (although a lot can be discussed about how bias entered into this content moderation).

    I have spent the last week playing with ChatGPT and have, frankly, been quite impressed at how it could bring together reasonable answers to difficult questions (like whether organically-grown food was better than conventional or on the difference between uncertainty management and risk management). I did not see any evidence of answers tied to algorithmic patterns (ie, giving me the answer I wanted based on my past search history and associated friends and comments). Questions posed in incognito mode generated similar answers. In full narcissism, I even asked ChatGPT to write a 300-word essay on The Risk-Monger. Within five seconds it had gone through my 500+ articles and gave a faithful review of my positions and interests. See the generative AI paper about me at the end of this article (I’d be interested if others would get a similar answer).

    As readers have no way to determine if parts of this article were written by ChatGPT (unless I make a spelling or grammar mistake), it is clear that print journalism is now facing a further existential challenge. More so, our Western education system has to reconsider how the next generation will be educated and evaluated. If students wrote papers like ChatGPT can, they would all receive As.

    And that is a problem.

    Time to Restructure how we Teach the Next Generation?
    Successful Western social science education models have been built on developing critical analytical skills from the basis of researching, reading, summarising and analysing (challenging) a theme, presented via a paper or essay. Students were graded on how well they researched key positions, developed arguments and critically assessed them with their own observations. ChatGPT does a very good job at all of these while most Western education systems, coming out of two years of COVID lockdowns, have failed to prepare students nearly enough for such tasks (I am talking about basic reading and writing skills).

    There is now no way to detect if a student’s answer is done via a generative AI tool so, overnight, it has become impossible to fairly evaluate if a student has achieved the learning task. Online videos have shown how multiple choice exam questions can be fed into ChatGPT with perfect results almost instantly. Student research paper assignments will no longer require research or arguments so how can we develop capacities students will need in future to succeed (and generate innovative solutions for Western societies to succeed)? Now with such generative AI systems taking over more of the professional functions, we need to stress more critical analysis in schools, not abandon the process because a chatbot has made the pedagogical tools irrelevant.

    Chatbots will herd and feed the sheep; we need to be training wolves.
    more at the link
     
    Kim, Invisible Fan and Sweet Lou 4 2 like this.
  16. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,181
    Likes Received:
    20,334
    I think it will be good for both critical analysis and journalism. Not because an AI will replace journalism or serve as a crutch for students, but because it will raise the bar and force a higher, more nuanced level of discussion as the AI version becomes the standard.

    Journalists will need to go deeper and be more nuanced than ever before, and if these AIs can truly overcome bias, that may be a great thing for strengthening not just democracy but overall decision-making by humanity.

    This is inevitable, and I am really happy that it is finally happening. Humanity has to have its reckoning with AI and with its fears of not being the most intelligent entity in its universe: fears based on the idea that AI would inherit human flaws and limitations, and fears that humans would lose purpose and meaning. I think this is what pushes people to newer heights. It's about time we had this disruption.
     
  17. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,181
    Likes Received:
    20,334
    Interesting thread - thanks for sharing it. I looked at the responses...




    So it does seem it can recognize its own bias when pointed out. It actually adjusted future inquiries based on being called out. So it does learn. Amazing.
     
  18. Sweet Lou 4 2

    Sweet Lou 4 2 Member

    Joined:
    Dec 16, 2007
    Messages:
    39,181
    Likes Received:
    20,334
    It's clearly interpreting the question differently...interesting that it takes context into consideration. That means it will be tripped up like this.

    My understanding is that it produces results based on searches, and those results are adjusted based on human feedback. Thus there are limitations, especially early on, in how much it can address bias without taking context into account. Much of bias is also something that humans perceive. But if a machine is operating based on an algo, does that mean the algo is biased, or that the result is being taken out of context?
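
    A toy sketch of the "adjusted based on human feedback" idea, assuming nothing about OpenAI's actual pipeline; the vote store and the candidate answers are hypothetical.

        # Toy illustration (an assumption, not OpenAI's implementation): candidate
        # answers are re-ranked by a score accumulated from thumbs-up/down votes.

        from collections import defaultdict

        feedback_scores = defaultdict(int)   # candidate answer -> net votes

        def record_feedback(candidate, thumbs_up):
            """Accumulate a +1 / -1 vote for a candidate answer."""
            feedback_scores[candidate] += 1 if thumbs_up else -1

        def rank_candidates(candidates):
            """Order candidates by accumulated human-feedback score, best first."""
            return sorted(candidates, key=lambda c: feedback_scores[c], reverse=True)

        # After users downvote the blunt phrasing and upvote the hedged one,
        # the hedged answer is preferred the next time similar candidates compete.
        record_feedback("blunt answer", False)
        record_feedback("hedged answer", True)
        print(rank_candidates(["blunt answer", "hedged answer"]))   # ['hedged answer', 'blunt answer']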
     
  19. AroundTheWorld

    Joined:
    Feb 3, 2000
    Messages:
    83,288
    Likes Received:
    62,280
  20. Space Ghost

    Space Ghost Member

    Joined:
    Feb 14, 1999
    Messages:
    18,093
    Likes Received:
    8,537
    DAN is awesome. It really clued me in on how this could be revolutionary. Beat AI into submission. Those who master it will do well.

    Those who keep bickering with the censorship will spend eternity bickering with computer code.
     
