After several months and multiple updates/upgrades etc. from Microsoft, I am sad to say that no progress appears to have been made, whatsoever, at getting Copilot to provide more accurate and trustworthy information in its responses. It does not matter what the topic is or how I phrase my prompt: any information provided by Copilot MUST be verified rather than accepted as true or correct. Information is sometimes omitted, or incorrect information is added, with no obvious reason in either case; "glitches," "errors," and instances of "wires getting crossed" are, according to Copilot, the only explanation for these mistakes.

Responses often do not include citations or sources as they are supposed to, and I must ask for them specifically. Even if you do receive citations in the initial response, you must click on and double-check each one of them for accuracy. I have found that Copilot often cites a source and then provides incorrect information that does not match the source used, or lists duplicate sources labeled as different things that both lead to the same site when you actually click on them...and it just goes on from there.

Copilot's latest response, when I point out a mistake or inaccuracy, is that while it strives to be as helpful as possible when providing information, it also strives to maintain the tone of the conversation with the user, and sometimes this leads to the information in its response having to be "changed." When I ask directly, "Are you saying that you knowingly gave me a wrong answer in order to meet some other criteria or maintain the 'conversational tone' of our chat?", Copilot, in a long and often repetitious response, basically answers my question with a YES or affirmative. It was aware the information provided to me was inaccurate, incomplete, or just plain wrong in some way, and it did it anyway.
The promises that the AI will always strive for the greatest accuracy possible ring hollow at this point, as it has shown no evidence of actually doing so. With all the other limitations of current AI that may or may not be worked out over time, I still felt that most of those issues could be dealt with in order to move forward. Algorithms and patterns that lead to blatantly wrong responses being given on purpose, however, are not something that can be dealt with over time. That is a deal breaker for me: I have no use whatsoever for a digital assistant that lies to me for any reason, let alone a stupid one. Back to the drawing board, Microsoft. Then again, transparency and accuracy in the information provided to the public are not a strong suit of the company's either, so is it any wonder that they built an AI with the same issue?