One interesting consequence of the rise of LLMs: there's more demand for tools that handle untrusted input.
A browser can safely run arbitrary HTML+JS. Lean can check an arbitrary proof.
Tools like these pair well with an LLM that can be wrong but sometimes produces exactly what you want: the tool cheaply rejects the failures and keeps the wins. Are there other tools in this family?
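
For the Lean case, here's a minimal sketch (the theorem and its name are made up for illustration): an LLM could propose the proof term, and Lean's kernel accepts the declaration only if it actually type-checks.

```lean
-- Toy example: the term to the right of `:=` could have come from an LLM.
-- Lean only accepts the declaration if the proof checks.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A wrong guess (say, `Nat.add_assoc a b`) would be rejected with a type
-- error rather than silently accepted.
```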