Are Copilot prompt injection flaws vulnerabilities or AI limits?

By Daniel Bender, BleepingComputer
06 Jan 2026

Microsoft has pushed back against claims that multiple prompt injection and sandbox-related issues in its Copilot AI assistant, raised by a security engineer, constitute security vulnerabilities. The dispute highlights a growing divide between how vendors and researchers define risk in generative AI systems.