I'm being very optimistic. No, I'm not assuming the programmers are completely free of bias - that's simply not possible. They can minimize it, but not eliminate it entirely. However, at some point the machine/AI will be good enough that it won't matter. (AI "programmers" are already somewhat conflicted anyway.) The machine can unbias the bias if it's given large enough data sets (which are biased, BTW), and I think we will get there. You can call it AGI if you want, but it's not necessary (AGI isn't well defined anyway).

EDIT - added NOTE: I'm talking about raw data and processing bias, not output filter bias. Output filter bias is a feature that humans want, and many AI companies are offering options to choose the level of output filter bias, though not yet completely unfiltered (due to the danger of someone like @AroundTheWorld building a germ bomb - just kidding). There are some smaller players that are completely unfiltered and will stay that way, but I think they'll remain small, and, well, that's really a separate AI safety topic.

As for acceptance - unless the government blocks it, it's going to be out there. There might be 'one' that propagates to the top, or there might be multiple (more likely). As I said before, there will be AI agent wars, and I hope the good guys win; but even if they don't, I still think it's better than where we are today, given the unlevel playing field I mentioned before.