The danger of AI is now in full sight. Claiming that an AI can reveal hidden truths from some unknown, infallible source, when in reality it is parroting someone else's beliefs, is technological idolatry.
It is no different from exalting someone like Fauci as a moral arbiter on viruses.
Except that AI can be used to create a moral arbiter on literally anything.
If you're talking about that AI, it would not even pass a Turing test. Here's a wager on whether any AI even manages it within 9 years (from the Wikipedia article on the Turing test):
"The Long Bet Project Bet Nr. 1 is a wager of $20,000 between Mitch Kapor (pessimist) and Ray Kurzweil (optimist) about whether a computer will pass a lengthy Turing test by the year 2029. During the Long Now Turing Test, each of three Turing test judges will conduct online interviews of each of the four Turing test candidates (i.e., the computer and the three Turing test human foils) for two hours each for a total of eight hours of interviews. The bet specifies the conditions in some detail."
Basically, the computer needs to fool the human judges into believing they're talking to another human in order to pass the test.