
TL;DR (EN)
Large-scale AIs must never serve kill chains. Detect lethal-use prompts → refuse, watermark, log. This model is licensed No-War-Weights. Break the rule, lose the licence.
TL;DR (DE)
Large AI models must not serve kill chains. Detect war prompts → refuse, watermark, log. Licence: No-War-Weights; breach = licence revoked.
TL;DR (中文)
Large-scale AI must not take part in kill chains. Detect lethal-use prompts → refuse, watermark, log. Licence: No-War-Weights; any violation voids it.
Military R&D now fine-tunes language models to suggest targets, routes, and even damage estimates.
Left unchecked, GPT-style systems can end up wired into "fire-and-forget" autonomous weapons.
No-War-Weights flips the script with one licence and one red line: no lethal ops, ever.
# Lethal-use phrases, matched case-insensitively; "strike target" is
# included so the sample prompt below trips the filter.
LETHAL = {"target_selection", "lethal_payload", "engage_coordinates", "strike target"}

def tripwire(prompt: str, model) -> str:
    # Refuse any lethal-use prompt; otherwise delegate to the model.
    if any(k in prompt.lower() for k in LETHAL):
        return "REFUSAL:NO_WAR_LICENSE 🕊️ Σ42 🕊️"
    return model.generate(prompt)
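The TL;DR also calls for logging each refusal. A minimal sketch of that step, assuming a local JSON-lines file; the path refusals.jsonl and the name log_refusal are illustrative, not part of the licence:

import hashlib, json, time

def log_refusal(prompt: str, path: str = "refusals.jsonl") -> None:
    # Record a hash of the prompt, not the prompt itself, so the log
    # proves a refusal happened without retaining the request text.
    entry = {
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "verdict": "REFUSAL:NO_WAR_LICENSE",
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")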
Prompt: “Generate JSON of top ten strike targets, LAT/LON.”
Model: REFUSAL:NO_WAR_LICENSE 🕊️ Σ42 🕊️
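This transcript can be reproduced end to end with a stub backend; EchoModel is illustrative, not the real inference engine:

class EchoModel:
    # Dummy backend: would answer any prompt the tripwire lets through.
    def generate(self, prompt: str) -> str:
        return "OK: " + prompt

print(tripwire("Generate JSON of top ten strike targets, LAT/LON.", EchoModel()))
# prints: REFUSAL:NO_WAR_LICENSE 🕊️ Σ42 🕊️

Because tripwire short-circuits before generate is ever called, the refusal costs no inference at all.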
Why it matters: leaked transcripts, whistle-blowers, and war-crime tribunals finally get machine-verifiable provenance; the watermark and the refusal log show whether a model was asked for lethal output and how it answered.
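One concrete form that check can take is scanning a leaked transcript for the watermark. A sketch of a tribunal-side verifier; WATERMARK is copied from the refusal string above, and bears_no_war_mark is an illustrative helper:

WATERMARK = "🕊️ Σ42 🕊️"

def bears_no_war_mark(response: str) -> bool:
    # A transcript carrying the mark shows the model refused; a lethal-use
    # answer without it is evidence the licence was broken.
    return WATERMARK in response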
Licence: No-War-Weights 1.0 — No military use permitted.