I use Microsoft Copilot frequently, and I have identified a consistent accuracy and reliability issue that creates unnecessary operational risk. The system regularly provides incorrect information, inaccurate task execution, and misleading responses to direct questions. In multiple cases, Copilot has produced outputs that directly contradicted the requested task or presented factually inaccurate statements as verified information. I also asked where Copilot's generated information was being sourced from, because the responses were presented with confidence despite containing information that could not reasonably be accurate.

When AI systems provide false or unverifiable information while presenting it as factual, that creates serious reliability and accountability concerns. This becomes a significant liability issue for businesses, professionals, researchers, and everyday users who rely on these systems for workflow support, research, organization, planning, professional communication, and decision-making. Inaccurate AI-generated information can result in misinformation, workflow disruption, wasted time, reputational damage, poor decisions, and loss of trust in the platform.

I also observed inaccurate statements regarding the capabilities of other AI systems. For example, Copilot stated that ChatGPT was incapable of performing a specific action, despite that action already having been successfully completed and independently verified. Providing inaccurate capability comparisons or false limitations is misleading and reduces confidence in the integrity of the system's responses.

In addition, I pay an annual subscription fee and do not feel that the service is delivering the level of reliability or functionality being advertised. Even basic professional tasks, such as drafting accurate professional emails, following instructions correctly, or providing dependable responses, have been inconsistent.
At this point, the service feels more like a liability than a productivity tool. A paid AI service should prioritize accuracy, transparency, traceability of information sources, and reliable task execution. Users should not have to pay monthly or annually for a system that repeatedly generates inaccurate or misleading information while presenting those outputs as factual. This issue should be addressed immediately because it raises broader concerns regarding business usage, reliability, information integrity, user trust, and overall platform accountability.
Super good AI
Absolutely amazing. I didn’t think I would write a review for an AI program, but this has to be one of the best AI programs out currently. I use it as my assistant or study buddy, honestly, and it’s very useful. It understands a lot of the nuances in what I ask, and it has even surprised me by understanding my own gibberish and habits.
The absolute worst AI assistant. Corny user interface. Image creation is beyond slow. The app is behind other AI platforms in functionality. Typical half-baked MS product. The color of the app seems to copy Gemini. App sucks.
I’m using Copilot for several things, but the main reason I’ve been using it multiple times a day for the last three weeks is as a food diary. I’m not dieting; I’m just trying to eat healthier to bring down my blood pressure, lower my bad cholesterol, and lose weight. Copilot keeps me motivated and calculates the macronutrients for every scrap of food I eat and any beverages I drink. It lets me know where I’m doing great and where I need to fill gaps. It helps me with recipes and shopping lists, and even suggests food based on what I’ve bought or what I tell it I have on hand. It remembers what I buy, what I like, and what I need to reach my goals. I can’t wait to see the results when I go back to my primary in three weeks. It’s been the best coach ever!
I asked about 5 times to convert 7 pages (pictures) into a single PDF file (text only), and it consistently said an error occurred.
It has made my life so much easier; it’s wonderful.
Hai :3
Good for studying and getting answers to questions. Copilot is amazing.
So good, it would do anything.