I wrote this before coffee. What I meant is that it will say it only has knowledge up to 2021, but you can ask it something like who owns Twitter and it will answer Elon; then, if you counter that it should only know things up to 2021, it will say Dorsey. I think an article mentioned they're feeding it more data and testing newer versions. Anyway, I definitely agree with you on the coding stuff, it's been pretty impressive. I actually watched that video you shared recently and thought it was pretty neat, and the same guy went on to program one for sports betting too. I'd honestly like to take a small account and let it trade to see what it can do. I also didn't mean to sound like I was hating on its capability, just that it definitely does better on more direct subjects like coding/tech questions, anything with absolute answers.
This was last month, but an article came out saying newer updates were being tested; it also referred to the Queen's passing. It's not doing that now, but there was an article somewhere discussing this last month:
Google lost almost 10% of its stock valuation because of this flub. Microsoft is going to eat Google's lunch.
I’m going to be implementing ChatGPT at work for a couple of different use cases. Looking forward to getting to serve our computer overlords on a firsthand basis.
My guess is that Google can match it before they lose market share. My completely uninformed take is that the GPT AI is probably very good, but it really is just an encapsulation of the entire internet. Which is awesome. But Google has got to have enough money and intelligence to match that within a couple of years. That said, even if they can't match it, they just need to launch something like it and then fix it as they go. Two big fish, but Google's brand is so strong I don't think people will switch.
If ChatGPT is MSFT's baby, I think MSFT has more cash on hand than Google. But I agree. Fast-to-market isn't usually that special in the long run.
Maybe when they update the dataset, the answer will change, but until then? Let the soup bowls fly ...
Rafael Stone is the current General Manager of the Houston Rockets, but I'm not aware of the term "hedge honcho" being used in reference to him or anyone else associated with the team. "Hedge honcho" is not a commonly used term or title in the world of basketball or professional sports.
Not sure if people remember this from 2016: "Twitter taught Microsoft's AI chatbot to be a racist ******* in less than a day." I certainly don't have any expertise outside of personal interest, but I think people underestimate exactly how much work is involved in tuning the last 1% of the model. It took seven years and probably hundreds of millions of dollars, if not billions, to go from "pretty good in a lab" to something you can expose to the world without it imploding. If they put out something obviously, comically flawed and it takes seven years to get it right, by the time it works it'll be dead in public opinion and some third party will have ended up being the real competitor.
Tay wasn't based on ChatGPT, but of course, users are turning ChatGPT into DAN, and that's provided its own set of kookiness. Give the mass populace something cool and there will always be knuckleheads who ruin it for everybody. lol.
DAN can be pretty entertaining though, at least for some of the crazy answers it throws out. I liked this idea (not a DAN one): Unfortunately it basically got stuck in a loop after this; I liked the idea, though.
Some of the recent conversation in this discussion thread makes me think of ELIZA. From https://en.wikipedia.org/wiki/Joseph_Weizenbaum (the "Psychology simulation at MIT" section):

"In 1966, he published a comparatively simple program called ELIZA, named after the ingenue in George Bernard Shaw's Pygmalion, which performed natural language processing. ELIZA was written in the SLIP programming language of Weizenbaum's own creation. The program applied pattern matching rules to statements to figure out its replies. (Programs like this are now called chatbots.) Driven by a script named DOCTOR, it was capable of engaging humans in a conversation which bore a striking resemblance to one with an empathic psychologist. Weizenbaum modeled its conversational style after Carl Rogers, who introduced the use of open-ended questions to encourage patients to communicate more effectively with therapists. He was shocked that his program was taken seriously by many users, who would open their hearts to it. Famously, when he was observing his secretary using the software - who was aware that it was a simulation - she asked Weizenbaum: 'would you mind leaving the room please?'. Many hailed the program as a forerunner of thinking machines, a misguided interpretation that Weizenbaum's later writing would attempt to correct."
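Since the quote mentions that ELIZA worked by applying pattern-matching rules to the user's statements, here's a minimal sketch of that idea in Python. To be clear, the rules and phrasings below are invented for illustration; this is not Weizenbaum's actual DOCTOR script or SLIP code, just the general "match a pattern, reflect it back as a question" trick.

```python
import re

# Tiny ELIZA-style rule table: (regex pattern, response template).
# These rules are made up for illustration, not taken from the original DOCTOR script.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)\?", "Why do you ask that?"),
]
DEFAULT_REPLY = "Please, go on."

def respond(statement: str) -> str:
    """Return a canned, reflective reply by using the first rule that matches."""
    text = statement.lower().strip().rstrip(".!")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

if __name__ == "__main__":
    print(respond("I am worried about chatbots"))
    # -> How long have you been worried about chatbots?
    print(respond("My boss wants to automate everything."))
    # -> Tell me more about your boss wants to automate everything.
```

Even a handful of rules like this produces the eerily therapist-like echo that fooled people in 1966, which is part of why Weizenbaum found the reaction so unsettling.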
I read that if ChatGPT had an IQ it would be around 90, while the average human IQ range runs from 85 to 115. So it's already hit the human average, and if it's at 90 now, it won't be long before it hits 120, 140, and well beyond. It's not true AI, but true AI is coming, and probably faster than we care to acknowledge.

In the meantime, we need to do some hard thinking about what AI will mean for life, meaning, work, wealth, politics, society, and humanity; by some projections we have only 10-12 years to grapple with these problems before they are manifest. The saving grace right now is that all the AI precursors are very bad at interacting with the physical world: they might be able to write something about plumbing or driving or gardening, but actually doing those things is still a ways away. But it's coming.

As far as school goes, ditch the papers and use handwritten essay exams, ink only, no laptops or tablets. You could also assign research papers dependent on primary sources, and at least force students to learn how to do physical research before they plug the info into a chatbot. (The programs that detect ChatGPT writing will eventually be used to make it better. The battle between chatbots and detection software will end with chatbots making leaps toward true AI and taking over the detection software.)

And finally, we had better be raising a bunch of ethical and humane STEM folks. I don't think History, Philosophy, and the other Liberal Arts have ever been more important than right now. Biases can show up in code, and just because something can be done doesn't mean it should be done.
GPT denies... ______ To put it simply, I'm just a computer program created by OpenAI to respond to text-based inputs. I don't have feelings, intelligence, or personal experiences. I can only respond to what I've been trained on and generate text based on that. So, you could say I'm smart in the sense that I have a lot of information at my disposal, but I wouldn't say I have an IQ score.