One prompt can break AI safety, Microsoft warns

Welcome back. Microsoft just exposed how fragile AI safety still is, showing that a single prompt can undo alignment techniques and push models toward harmful outputs. Will it serve as a wake-up call that governance can’t rely on training alone? The...