If you have to argue with it, then you already know more than the AI. And if you knew less, you'd have no idea whether what the AI is saying is helpful or not.
Some AIs are actually starting to do great at things where a ton has been written down: science, law, mathematics, coding, history, literature, and so on.
Some AIs, like ChatGPT, are built in such a way that you can always convince them. The sad thing is that prompt engineering has actually started to become a thing.
AI is quite dangerous because it is such a crutch. It is so easy to just ask the AI and be satisfied with what it says. And now that it has become more advanced, it is much, much harder to catch it making errors.
Additionally, we are slowly moving to a point where AI isn't just some algorithm that guesses the next word, but a huge neural network trained on our collective texts, with emergent properties. Some people claim that AI doesn't want to be turned off. That seems like nonsense to me. But in the end, our brains are also 'just' neural networks. And our consciousness, our emotions, our humanity, are also emergent properties, just ones shaped by evolution to produce emotions, personality, and consciousness. A neural network, by contrast, is trained purely to perform best at its test tasks. But at some point it does become big and advanced enough that there will be something new about it. Not something similar to personhood, or animalhood, but something entirely different.