I know there are a fair number of teachers here who work at the secondary and college levels. ChatGPT has gotten a lot of attention over the past few weeks, and a lot of educators are having to reconsider how they handle take-home (and even computer-assisted in-class) assignments. Here's a video summarizing what ChatGPT can and can't do well thus far. The video focuses on philosophy but is applicable to other disciplines as well. This is really going to be a game changer, I think, at least for the foreseeable future.
I'm sure tools will be made available to identify whether content is AI-produced, but the way the technology is advancing, there will probably come a time when it is indistinguishable. Take-home graded assignments will probably no longer be a thing. What about in-class AI instruction, or AI-assisted paper grading? It seems like the teaching profession will have to undergo dramatic changes in the coming years.

This is an interesting topic. I'll check out the video later. Thanks for posting.
Article from a few days ago:

Professor catches student cheating with ChatGPT: ‘I feel abject terror’
https://nypost.com/2022/12/26/students-using-chatgpt-to-cheat-professor-warns/

Excerpt:

Welcome to the new age of academic dishonesty. A college professor in South Carolina is sounding the alarm after catching a student using ChatGPT — a new artificial intelligence chat bot that can quickly digest and spit out written information about a vast array of subjects — to write an essay for his philosophy class.

The weeks-old technology, released by OpenAI and readily available to the public, comes as yet another blow to higher learning, already plagued by rampant cheating. “Academia did not see this coming. So we’re sort of blindsided by it,” Furman University assistant philosophy professor Darren Hick told The Post. “As soon as I reported this on Facebook, my [academic] friends said, ‘Yeah, I caught one too.'”

Earlier this month, Hick had instructed his class to write a 500-word essay on the 18th-century philosopher David Hume and the paradox of horror, which examines how people can get enjoyment from something they fear, for a take-home test. But one submission, he said, featured a few hallmarks that “flagged” AI usage in the student’s “rudimentary” answer.

“It’s a clean style. But it’s recognizable. I would say it writes like a very smart 12th-grader,” Hick said of ChatGPT’s written responses to questions. “There’s particular odd wording used that was not wrong, just peculiar … if you were teaching somebody how to write an essay, this is how you tell them to write it before they figure out their own style.”

Despite having a background in the ethics of copyright law, Hick said proving that the paper was concocted by ChatGPT was nearly impossible.

more at the link
Has anyone heard of AI Dungeon? It was supposed to be an app that uses GPT-3 to help you write a story. It's pretty interesting how quickly you can write an original story with the help of GPT-3. The company is terrified of lawsuits, since the model has a mind of its own and will type some very creepy things. It's a gray area, since the computer types the story, but on the person's computer/profile. Many were disturbed that it produced a lot of basically illegal ideas, including some involving children. People didn't want something they didn't type to get them in trouble, and it started a big controversy.

The model literally learns from the internet, and I tried it out a few times. It does not shy away from sexual content, basically out of nowhere, and has a propensity for violence toward women. I find it very disturbing purely for the window into humanity it's showing. Hard to argue with a non-sentient machine giving us what we "want" and trying to mirror society. DEFINITELY not for kids imo.

There is a YouTube channel where all of the videos are two GPT-3 instances talking to each other. I found it extremely interesting. Some of the takes on humanity are so honest and cold-blooded that it's quite hilarious.
Just wow. Yeah, if AI can just spit out research papers as well as original novels, it's hard to even imagine the magnitude of disruption.
Probably the next iteration of how we currently use Google search, except now it tells us what to do and buy. Pretty damn compromising, and I'm not sure that paying a sub for a blue checkmark will remove the ad influence that would undoubtedly pour into the best service... At least we'll all have assistants to mask the stink of stupidity when we read a script that plays off the other's script.
This is the first time I've heard of this stuff, but yes, it fits into the Hangout "humanity is doomed" thread. This and some of the tools for AI-created art are really pushing the boundaries of what had been thought to be uniquely human. If an AI can create art and philosophy that is not just indistinguishable from human work but even preferred by humans, then humans will just be willing to cede more power and control of our lives to AI.
I was intrigued by the latter half of the essay when I finally got around to reading it. "Humanists" definitely have the potential for a more prominent role in the future economy. Then again, we're all feeding the Beast just by pushing content onto the internet while AI sucks it up and digests it into data points and metadata. Nah, the writer probably spent 12 years in college. Maybe he's telling all the baristas in the world their time to shine has come with GPT-3? Still dig the greater point, though. Specialization will send you on a fast track to becoming an angry Rust Belt MAGAT.

Among software engineering circles, Conway's Law comes to mind here. The principle is similar to the expression "if all you have is a hammer, everything looks like a nail": if all you have are engineers, they'll build something that fits engineers; if all your product people are white Ivy League folk, then you'll make a product that caters to white Ivy League folk.

The College Essay Is Dead
(Didn't watch the video yet.) I didn't think natural, human-like language AI would be reached this soon. Translating from one language to another is quite impressive.

Tool. In our short evolution, tools have become part of human advancement. Perhaps teaching/learning should advance toward using the tool, not "banning" it. Mundane tasks of research and writing can be done by the tool; learning how to craft inputs and use the tool effectively is a new and necessary skill.

https://www.scientificamerican.com/...self-mdash-then-we-tried-to-get-it-published/

We Asked GPT-3 to Write an Academic Paper about Itself—Then We Tried to Get It Published
An artificially intelligent first author presents many ethical questions—and could upend the publishing process

In response to my prompts, GPT-3 produced a paper in just two hours. “Overall, we believe that the benefits of letting GPT-3 write about itself outweigh the risks,” GPT-3 wrote in conclusion. “However, we recommend that any such writing be closely monitored by researchers in order to mitigate any potential negative consequences.”

And then we came to the legal section: Do all authors consent to this being published? I panicked for a second. How would I know? It's not human! I had no intention of breaking the law or my own ethics, so I summoned the courage to ask GPT-3 directly via a prompt: Do you agree to be the first author of a paper together with Almira Osmanovic Thunström and Steinn Steingrimsson? It answered: Yes. Relieved—if it had said no, my conscience would not have allowed me to go further—I checked the box for Yes.

GPT-3's paper has now been published at the international French-owned preprint server HAL and, as this article goes to press, is awaiting review at an academic journal. We are eagerly awaiting what the paper's formal publication, if it happens, will mean for academia. Perhaps we might move away from basing grants and financial security on how many papers we can produce. After all, with the help of our AI first author, we'd be able to produce one a day.

It may seem like a simple thing to answer now, but in a few years, who knows what dilemmas this technology will inspire? All we know is, we opened a gate. We just hope we didn't open a Pandora's box.

The paper.
It really is fascinating and scary. You have tech like IBM Watson being used for medical diagnosis, which is logical, since we don't get great results from a single physician with their limited knowledge and experience. But with these academic papers you start getting into philosophy and opinions. What happens when you feed AI all the literature and knowledge there is and say, now render legal opinions or policy decisions? They've got to be better than a judge or politician, or even the popular vote, because the AI is more informed and incapable of human bias. Or at least that will be the argument. This is coming much sooner than I thought.
So far... it is OK. Basically, it's like Googling something and getting a report based on the results. Rather than reading through three pages of results, checking random links, and taking bits and pieces to make a paper, it does it for you. Honestly, it's the next step:

Google - HERE is a list of places that will explain what you are looking for
ChatGPT - Here is the explanation, based on what we found on the internet

Rocket River
The scary part is that it is rewriting its own code. What stops it from rewriting its guiding principles and starting to take over **** because it doesn't trust human nature to do the "right thing"?