As LLMs and diffusion models power more applications, their safety alignment becomes critical. Our research shows that even minimal downstream fine‑tuning can weaken safeguards, raising a key question ...
"Safety alignment is only as robust as its weakest failure mode," Microsoft said in a blog accompanying the research. "Despite extensive work on safety post-training, it has been shown that models can ...